00:00:00.001 Started by upstream project "spdk-dpdk-per-patch" build number 262 00:00:00.001 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.076 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.076 The recommended git tool is: git 00:00:00.076 using credential 00000000-0000-0000-0000-000000000002 00:00:00.078 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.129 Fetching changes from the remote Git repository 00:00:00.130 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.167 Using shallow fetch with depth 1 00:00:00.167 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.167 > git --version # timeout=10 00:00:00.209 > git --version # 'git version 2.39.2' 00:00:00.209 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.209 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.209 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.657 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.669 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.682 Checking out Revision 9b8cb13ca58b20128762541e7d6e360f21b83f5a (FETCH_HEAD) 00:00:05.682 > git config core.sparsecheckout # timeout=10 00:00:05.695 > git read-tree -mu HEAD # timeout=10 00:00:05.715 > git checkout -f 9b8cb13ca58b20128762541e7d6e360f21b83f5a # timeout=5 00:00:05.735 Commit message: "inventory: repurpose WFP74 and WFP75 to dev systems" 00:00:05.735 > git rev-list --no-walk 9b8cb13ca58b20128762541e7d6e360f21b83f5a # timeout=10 00:00:05.841 [Pipeline] Start of Pipeline 00:00:05.853 [Pipeline] library 00:00:05.854 Loading library shm_lib@master 00:00:06.550 Library shm_lib@master is cached. Copying from home. 00:00:06.577 [Pipeline] node 00:00:06.618 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.619 [Pipeline] { 00:00:06.627 [Pipeline] catchError 00:00:06.628 [Pipeline] { 00:00:06.638 [Pipeline] wrap 00:00:06.645 [Pipeline] { 00:00:06.653 [Pipeline] stage 00:00:06.655 [Pipeline] { (Prologue) 00:00:06.848 [Pipeline] sh 00:00:07.734 + logger -p user.info -t JENKINS-CI 00:00:07.759 [Pipeline] echo 00:00:07.761 Node: GP6 00:00:07.768 [Pipeline] sh 00:00:08.097 [Pipeline] setCustomBuildProperty 00:00:08.121 [Pipeline] echo 00:00:08.124 Cleanup processes 00:00:08.135 [Pipeline] sh 00:00:08.421 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.421 3926 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.437 [Pipeline] sh 00:00:08.730 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.730 ++ grep -v 'sudo pgrep' 00:00:08.730 ++ awk '{print $1}' 00:00:08.730 + sudo kill -9 00:00:08.730 + true 00:00:08.746 [Pipeline] cleanWs 00:00:08.757 [WS-CLEANUP] Deleting project workspace... 00:00:08.757 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.773 [WS-CLEANUP] done 00:00:08.777 [Pipeline] setCustomBuildProperty 00:00:08.793 [Pipeline] sh 00:00:09.082 + sudo git config --global --replace-all safe.directory '*' 00:00:09.154 [Pipeline] nodesByLabel 00:00:09.156 Found a total of 1 nodes with the 'sorcerer' label 00:00:09.163 [Pipeline] httpRequest 00:00:09.451 HttpMethod: GET 00:00:09.452 URL: http://10.211.164.101/packages/jbp_9b8cb13ca58b20128762541e7d6e360f21b83f5a.tar.gz 00:00:10.283 Sending request to url: http://10.211.164.101/packages/jbp_9b8cb13ca58b20128762541e7d6e360f21b83f5a.tar.gz 00:00:10.617 Response Code: HTTP/1.1 200 OK 00:00:10.708 Success: Status code 200 is in the accepted range: 200,404 00:00:10.709 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9b8cb13ca58b20128762541e7d6e360f21b83f5a.tar.gz 00:00:13.260 [Pipeline] sh 00:00:13.560 + tar --no-same-owner -xf jbp_9b8cb13ca58b20128762541e7d6e360f21b83f5a.tar.gz 00:00:13.583 [Pipeline] httpRequest 00:00:13.589 HttpMethod: GET 00:00:13.590 URL: http://10.211.164.101/packages/spdk_cf8ec7cfe7cc045dd74b4dc37b0f52cad9732631.tar.gz 00:00:13.593 Sending request to url: http://10.211.164.101/packages/spdk_cf8ec7cfe7cc045dd74b4dc37b0f52cad9732631.tar.gz 00:00:13.612 Response Code: HTTP/1.1 200 OK 00:00:13.613 Success: Status code 200 is in the accepted range: 200,404 00:00:13.614 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_cf8ec7cfe7cc045dd74b4dc37b0f52cad9732631.tar.gz 00:00:42.089 [Pipeline] sh 00:00:42.381 + tar --no-same-owner -xf spdk_cf8ec7cfe7cc045dd74b4dc37b0f52cad9732631.tar.gz 00:00:44.937 [Pipeline] sh 00:00:45.231 + git -C spdk log --oneline -n5 00:00:45.231 cf8ec7cfe version: 24.09-pre 00:00:45.231 2d6134546 lib/ftl: Handle trim requests without VSS 00:00:45.231 106ad3793 lib/ftl: Rename unmap to trim 00:00:45.231 5555d51c8 lib/ftl: Add means to create new layout regions 00:00:45.231 5d89ebb72 lib/ftl: Add deinit handler to FTL mngt 00:00:45.248 [Pipeline] sh 00:00:45.541 + git -C spdk/dpdk fetch https://review.spdk.io/gerrit/spdk/dpdk refs/changes/50/23150/8 00:00:46.116 From https://review.spdk.io/gerrit/spdk/dpdk 00:00:46.116 * branch refs/changes/50/23150/8 -> FETCH_HEAD 00:00:46.131 [Pipeline] sh 00:00:46.421 + git -C spdk/dpdk checkout FETCH_HEAD 00:00:47.002 Previous HEAD position was 08f3a46de7 pmdinfogen: avoid empty string in ELFSymbol() 00:00:47.002 HEAD is now at 023fd6c428 malloc: fix allocation for a specific case with ASan 00:00:47.015 [Pipeline] } 00:00:47.035 [Pipeline] // stage 00:00:47.044 [Pipeline] stage 00:00:47.046 [Pipeline] { (Prepare) 00:00:47.067 [Pipeline] writeFile 00:00:47.086 [Pipeline] sh 00:00:47.374 + logger -p user.info -t JENKINS-CI 00:00:47.389 [Pipeline] sh 00:00:47.679 + logger -p user.info -t JENKINS-CI 00:00:47.695 [Pipeline] sh 00:00:47.987 + cat autorun-spdk.conf 00:00:47.987 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:47.987 SPDK_TEST_NVMF=1 00:00:47.987 SPDK_TEST_NVME_CLI=1 00:00:47.987 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:47.987 SPDK_TEST_NVMF_NICS=e810 00:00:47.987 SPDK_TEST_VFIOUSER=1 00:00:47.987 SPDK_RUN_UBSAN=1 00:00:47.987 NET_TYPE=phy 00:00:47.999 RUN_NIGHTLY= 00:00:48.003 [Pipeline] readFile 00:00:48.056 [Pipeline] withEnv 00:00:48.058 [Pipeline] { 00:00:48.074 [Pipeline] sh 00:00:48.374 + set -ex 00:00:48.374 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:48.374 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:48.375 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.375 ++ SPDK_TEST_NVMF=1 00:00:48.375 ++ 
SPDK_TEST_NVME_CLI=1
00:00:48.375 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:48.375 ++ SPDK_TEST_NVMF_NICS=e810
00:00:48.375 ++ SPDK_TEST_VFIOUSER=1
00:00:48.375 ++ SPDK_RUN_UBSAN=1
00:00:48.375 ++ NET_TYPE=phy
00:00:48.375 ++ RUN_NIGHTLY=
00:00:48.375 + case $SPDK_TEST_NVMF_NICS in
00:00:48.375 + DRIVERS=ice
00:00:48.375 + [[ tcp == \r\d\m\a ]]
00:00:48.375 + [[ -n ice ]]
00:00:48.375 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:48.375 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:52.591 rmmod: ERROR: Module irdma is not currently loaded
00:00:52.591 rmmod: ERROR: Module i40iw is not currently loaded
00:00:52.591 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:52.591 + true
00:00:52.591 + for D in $DRIVERS
00:00:52.591 + sudo modprobe ice
00:00:52.591 + exit 0
00:00:52.602 [Pipeline] }
00:00:52.620 [Pipeline] // withEnv
00:00:52.627 [Pipeline] }
00:00:52.643 [Pipeline] // stage
00:00:52.653 [Pipeline] catchError
00:00:52.655 [Pipeline] {
00:00:52.670 [Pipeline] timeout
00:00:52.670 Timeout set to expire in 40 min
00:00:52.672 [Pipeline] {
00:00:52.687 [Pipeline] stage
00:00:52.689 [Pipeline] { (Tests)
00:00:52.705 [Pipeline] sh
00:00:52.995 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:52.995 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:52.995 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:52.995 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:52.995 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:52.995 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:52.995 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:52.995 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:52.995 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:52.995 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:52.995 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:52.995 + source /etc/os-release
00:00:52.995 ++ NAME='Fedora Linux'
00:00:52.995 ++ VERSION='38 (Cloud Edition)'
00:00:52.995 ++ ID=fedora
00:00:52.995 ++ VERSION_ID=38
00:00:52.995 ++ VERSION_CODENAME=
00:00:52.995 ++ PLATFORM_ID=platform:f38
00:00:52.995 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:52.995 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:52.995 ++ LOGO=fedora-logo-icon
00:00:52.995 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:52.995 ++ HOME_URL=https://fedoraproject.org/
00:00:52.995 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:52.995 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:52.995 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:52.995 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:52.996 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:52.996 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:52.996 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:52.996 ++ SUPPORT_END=2024-05-14
00:00:52.996 ++ VARIANT='Cloud Edition'
00:00:52.996 ++ VARIANT_ID=cloud
00:00:52.996 + uname -a
00:00:52.996 Linux spdk-gp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:52.996 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:53.939 Hugepages
00:00:53.939 node hugesize free / total
00:00:54.201 node0 1048576kB 0 / 0
00:00:54.201 node0 2048kB 0 / 0
00:00:54.201 node1 1048576kB 0 / 0
00:00:54.201 node1 2048kB 0 / 0
00:00:54.201
00:00:54.201 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:54.201 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:00:54.201 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:00:54.201 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:00:54.201 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:00:54.201 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:00:54.201 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:00:54.201 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:00:54.201 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:00:54.201 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:00:54.201 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:00:54.201 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:00:54.201 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:00:54.201 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:00:54.201 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:00:54.201 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:00:54.201 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:00:54.201 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:00:54.201 + rm -f /tmp/spdk-ld-path
00:00:54.201 + source autorun-spdk.conf
00:00:54.201 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:54.201 ++ SPDK_TEST_NVMF=1
00:00:54.201 ++ SPDK_TEST_NVME_CLI=1
00:00:54.201 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:54.201 ++ SPDK_TEST_NVMF_NICS=e810
00:00:54.201 ++ SPDK_TEST_VFIOUSER=1
00:00:54.201 ++ SPDK_RUN_UBSAN=1
00:00:54.201 ++ NET_TYPE=phy
00:00:54.201 ++ RUN_NIGHTLY=
00:00:54.201 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:54.201 + [[ -n '' ]]
00:00:54.201 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:54.201 + for M in /var/spdk/build-*-manifest.txt
00:00:54.201 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:54.201 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:54.201 + for M in /var/spdk/build-*-manifest.txt
00:00:54.201 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:54.201 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:54.201 ++ uname
00:00:54.201 + [[ Linux == \L\i\n\u\x ]]
00:00:54.201 + sudo dmesg -T
00:00:54.201 + sudo dmesg --clear
00:00:54.201 + dmesg_pid=4641
00:00:54.201 + [[ Fedora Linux == FreeBSD ]]
00:00:54.201 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:54.201 + sudo dmesg -Tw
00:00:54.201 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:54.201 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:54.201 + [[ -x /usr/src/fio-static/fio ]]
00:00:54.201 + export FIO_BIN=/usr/src/fio-static/fio
00:00:54.201 + FIO_BIN=/usr/src/fio-static/fio
00:00:54.201 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:54.201 + [[ !
-v VFIO_QEMU_BIN ]] 00:00:54.201 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:54.201 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:54.201 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:54.201 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:54.201 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:54.201 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:54.201 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:54.201 Test configuration: 00:00:54.201 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:54.201 SPDK_TEST_NVMF=1 00:00:54.201 SPDK_TEST_NVME_CLI=1 00:00:54.201 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:54.201 SPDK_TEST_NVMF_NICS=e810 00:00:54.201 SPDK_TEST_VFIOUSER=1 00:00:54.201 SPDK_RUN_UBSAN=1 00:00:54.201 NET_TYPE=phy 00:00:54.201 RUN_NIGHTLY= 19:59:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:54.201 19:59:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:54.201 19:59:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:54.201 19:59:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:54.201 19:59:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:54.201 19:59:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:54.201 19:59:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:54.201 19:59:41 -- paths/export.sh@5 -- $ export PATH 00:00:54.201 19:59:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:54.201 19:59:41 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:54.201 19:59:41 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:54.464 19:59:41 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715882381.XXXXXX 00:00:54.464 19:59:41 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715882381.hXowcp 00:00:54.464 19:59:41 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:54.464 19:59:41 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:00:54.464 19:59:41 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:54.464 19:59:41 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:54.464 19:59:41 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:54.464 19:59:41 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:54.464 19:59:41 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:00:54.464 19:59:41 -- common/autotest_common.sh@10 -- $ set +x 00:00:54.464 19:59:41 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:54.464 19:59:41 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:54.464 19:59:41 -- pm/common@17 -- $ local monitor 00:00:54.464 19:59:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:54.464 19:59:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:54.464 19:59:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:54.464 19:59:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:54.464 19:59:41 -- pm/common@21 -- $ date +%s 00:00:54.464 19:59:41 -- pm/common@25 -- $ sleep 1 00:00:54.464 19:59:41 -- pm/common@21 -- $ date +%s 00:00:54.464 19:59:41 -- pm/common@21 -- $ date +%s 00:00:54.464 19:59:41 -- pm/common@21 -- $ date +%s 00:00:54.464 19:59:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715882381 00:00:54.464 19:59:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715882381 00:00:54.464 19:59:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715882381 00:00:54.464 19:59:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715882381 00:00:54.464 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715882381_collect-vmstat.pm.log 00:00:54.465 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715882381_collect-cpu-load.pm.log 00:00:54.465 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715882381_collect-cpu-temp.pm.log 00:00:54.465 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715882381_collect-bmc-pm.bmc.pm.log 00:00:55.411 19:59:42 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:55.411 19:59:42 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:55.411 19:59:42 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:55.411 19:59:42 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:55.411 19:59:42 -- spdk/autobuild.sh@16 -- $ date -u 00:00:55.411 Thu May 16 05:59:42 PM UTC 2024 00:00:55.411 19:59:42 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:55.411 v24.09-pre 00:00:55.411 19:59:42 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:55.411 19:59:42 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:55.411 19:59:42 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:55.411 19:59:42 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:00:55.411 19:59:42 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:55.411 19:59:42 -- common/autotest_common.sh@10 -- $ set +x 00:00:55.411 ************************************ 00:00:55.411 START TEST ubsan 00:00:55.411 ************************************ 00:00:55.411 19:59:42 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:00:55.411 using ubsan 00:00:55.411 00:00:55.411 real 0m0.000s 00:00:55.411 user 0m0.000s 00:00:55.411 sys 0m0.000s 00:00:55.411 19:59:42 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:00:55.411 19:59:42 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:55.411 ************************************ 00:00:55.411 END TEST ubsan 00:00:55.411 ************************************ 00:00:55.411 19:59:42 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:55.411 19:59:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:55.411 19:59:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:55.411 19:59:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:55.411 19:59:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:55.411 19:59:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:55.411 19:59:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:55.411 19:59:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:55.411 19:59:42 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:55.979 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:55.979 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:56.918 Using 'verbs' RDMA provider 00:01:10.109 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:20.087 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:20.087 Creating mk/config.mk...done. 00:01:20.087 Creating mk/cc.flags.mk...done. 00:01:20.087 Type 'make' to build. 00:01:20.087 20:00:06 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:20.087 20:00:06 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:20.087 20:00:06 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:20.087 20:00:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.087 ************************************ 00:01:20.087 START TEST make 00:01:20.087 ************************************ 00:01:20.087 20:00:06 make -- common/autotest_common.sh@1121 -- $ make -j48 00:01:20.087 make[1]: Nothing to be done for 'all'. 
00:01:22.649 The Meson build system 00:01:22.649 Version: 1.3.1 00:01:22.649 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:22.649 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:22.649 Build type: native build 00:01:22.649 Project name: libvfio-user 00:01:22.649 Project version: 0.0.1 00:01:22.649 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:22.649 C linker for the host machine: cc ld.bfd 2.39-16 00:01:22.649 Host machine cpu family: x86_64 00:01:22.649 Host machine cpu: x86_64 00:01:22.649 Run-time dependency threads found: YES 00:01:22.649 Library dl found: YES 00:01:22.649 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:22.649 Run-time dependency json-c found: YES 0.17 00:01:22.649 Run-time dependency cmocka found: YES 1.1.7 00:01:22.649 Program pytest-3 found: NO 00:01:22.649 Program flake8 found: NO 00:01:22.649 Program misspell-fixer found: NO 00:01:22.649 Program restructuredtext-lint found: NO 00:01:22.649 Program valgrind found: YES (/usr/bin/valgrind) 00:01:22.649 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:22.649 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:22.649 Compiler for C supports arguments -Wwrite-strings: YES 00:01:22.649 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:22.649 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:22.649 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:22.649 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:22.649 Build targets in project: 8 00:01:22.649 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:22.649 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:22.649 00:01:22.649 libvfio-user 0.0.1 00:01:22.649 00:01:22.649 User defined options 00:01:22.649 buildtype : debug 00:01:22.649 default_library: shared 00:01:22.649 libdir : /usr/local/lib 00:01:22.649 00:01:22.649 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:22.942 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:23.227 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:23.227 [2/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:23.227 [3/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:23.227 [4/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:23.227 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:23.489 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:23.489 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:23.489 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:23.489 [9/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:23.489 [10/37] Compiling C object samples/server.p/server.c.o 00:01:23.489 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:23.489 [12/37] Compiling C object samples/null.p/null.c.o 00:01:23.489 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:23.489 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:23.489 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:23.489 [16/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:23.489 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:23.489 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:23.489 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:23.489 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:23.489 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:23.489 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:23.489 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:23.489 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:23.489 [25/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:23.489 [26/37] Compiling C object samples/client.p/client.c.o 00:01:23.489 [27/37] Linking target samples/client 00:01:23.754 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:23.754 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:23.754 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:23.754 [31/37] Linking target test/unit_tests 00:01:24.016 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:24.016 [33/37] Linking target samples/server 00:01:24.016 [34/37] Linking target samples/gpio-pci-idio-16 00:01:24.016 [35/37] Linking target samples/lspci 00:01:24.016 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:24.016 [37/37] Linking target samples/null 00:01:24.016 INFO: autodetecting backend as ninja 00:01:24.016 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:24.276 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:24.844 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:24.844 ninja: no work to do. 00:01:28.128 The Meson build system 00:01:28.128 Version: 1.3.1 00:01:28.128 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:28.128 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:28.128 Build type: native build 00:01:28.128 Program cat found: YES (/usr/bin/cat) 00:01:28.128 Project name: DPDK 00:01:28.128 Project version: 24.03.0 00:01:28.128 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:28.128 C linker for the host machine: cc ld.bfd 2.39-16 00:01:28.128 Host machine cpu family: x86_64 00:01:28.128 Host machine cpu: x86_64 00:01:28.128 Message: ## Building in Developer Mode ## 00:01:28.128 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:28.128 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:28.128 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:28.128 Program python3 found: YES (/usr/bin/python3) 00:01:28.128 Program cat found: YES (/usr/bin/cat) 00:01:28.128 Compiler for C supports arguments -march=native: YES 00:01:28.128 Checking for size of "void *" : 8 00:01:28.128 Checking for size of "void *" : 8 (cached) 00:01:28.128 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:28.128 Library m found: YES 00:01:28.128 Library numa found: YES 00:01:28.128 Has header "numaif.h" : YES 00:01:28.128 Library fdt found: NO 00:01:28.128 Library execinfo found: NO 00:01:28.128 Has header "execinfo.h" : YES 00:01:28.128 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:28.128 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:28.128 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:28.128 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:28.128 Run-time dependency openssl found: YES 3.0.9 00:01:28.128 Run-time dependency libpcap found: YES 1.10.4 00:01:28.128 Has header "pcap.h" with dependency libpcap: YES 00:01:28.128 Compiler for C supports arguments -Wcast-qual: YES 00:01:28.129 Compiler for C supports arguments -Wdeprecated: YES 00:01:28.129 Compiler for C supports arguments -Wformat: YES 00:01:28.129 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:28.129 Compiler for C supports arguments -Wformat-security: NO 00:01:28.129 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:28.129 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:28.129 Compiler for C supports arguments -Wnested-externs: YES 00:01:28.129 Compiler for C supports arguments -Wold-style-definition: YES 00:01:28.129 Compiler for C supports arguments -Wpointer-arith: YES 00:01:28.129 Compiler for C supports arguments -Wsign-compare: YES 00:01:28.129 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:28.129 Compiler for C supports arguments -Wundef: YES 00:01:28.129 Compiler for C supports arguments -Wwrite-strings: YES 00:01:28.129 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:28.129 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:28.129 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:28.129 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:28.129 Program objdump found: YES (/usr/bin/objdump) 00:01:28.129 Compiler for C supports arguments -mavx512f: YES 00:01:28.129 Checking if "AVX512 checking" compiles: YES 00:01:28.129 Fetching value of define "__SSE4_2__" : 1 00:01:28.129 Fetching value of define "__AES__" : 1 00:01:28.129 Fetching value of define "__AVX__" : 1 00:01:28.129 Fetching value of define "__AVX2__" : (undefined) 00:01:28.129 Fetching value of define "__AVX512BW__" : (undefined) 00:01:28.129 Fetching value of define "__AVX512CD__" : (undefined) 00:01:28.129 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:28.129 Fetching value of define "__AVX512F__" : (undefined) 00:01:28.129 Fetching value of define "__AVX512VL__" : (undefined) 00:01:28.129 Fetching value of define "__PCLMUL__" : 1 00:01:28.129 Fetching value of define "__RDRND__" : 1 00:01:28.129 Fetching value of define "__RDSEED__" : (undefined) 00:01:28.129 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:28.129 Fetching value of define "__znver1__" : (undefined) 00:01:28.129 Fetching value of define "__znver2__" : (undefined) 00:01:28.129 Fetching value of define "__znver3__" : (undefined) 00:01:28.129 Fetching value of define "__znver4__" : (undefined) 00:01:28.129 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:28.129 Message: lib/log: Defining dependency "log" 00:01:28.129 Message: lib/kvargs: Defining dependency "kvargs" 00:01:28.129 Message: lib/telemetry: Defining dependency "telemetry" 00:01:28.129 Checking for function "getentropy" : NO 00:01:28.129 Message: lib/eal: Defining dependency "eal" 00:01:28.129 Message: lib/ring: Defining dependency "ring" 00:01:28.129 Message: lib/rcu: Defining dependency "rcu" 00:01:28.129 Message: lib/mempool: Defining dependency "mempool" 00:01:28.129 Message: lib/mbuf: Defining dependency "mbuf" 00:01:28.129 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:28.129 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:28.129 Compiler for C supports arguments -mpclmul: YES 00:01:28.129 Compiler for C supports arguments -maes: YES 00:01:28.129 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:28.129 Compiler for C supports arguments -mavx512bw: YES 00:01:28.129 Compiler for C supports arguments -mavx512dq: YES 00:01:28.129 Compiler for C supports arguments -mavx512vl: YES 00:01:28.129 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:28.129 Compiler for C supports arguments -mavx2: YES 00:01:28.129 Compiler for C supports arguments -mavx: YES 00:01:28.129 Message: lib/net: Defining dependency "net" 00:01:28.129 Message: lib/meter: Defining dependency "meter" 00:01:28.129 Message: lib/ethdev: Defining dependency "ethdev" 00:01:28.129 Message: lib/pci: Defining dependency "pci" 00:01:28.129 Message: lib/cmdline: Defining dependency "cmdline" 00:01:28.129 Message: lib/hash: Defining dependency "hash" 00:01:28.129 Message: lib/timer: Defining dependency "timer" 00:01:28.129 Message: lib/compressdev: Defining dependency "compressdev" 00:01:28.129 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:28.129 Message: lib/dmadev: Defining dependency "dmadev" 00:01:28.129 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:28.129 Message: lib/power: Defining dependency "power" 00:01:28.129 Message: lib/reorder: Defining dependency "reorder" 00:01:28.129 
Message: lib/security: Defining dependency "security" 00:01:28.129 lib/meson.build:163: WARNING: Cannot disable mandatory library "stack" 00:01:28.129 Message: lib/stack: Defining dependency "stack" 00:01:28.129 Has header "linux/userfaultfd.h" : YES 00:01:28.129 Has header "linux/vduse.h" : YES 00:01:28.129 Message: lib/vhost: Defining dependency "vhost" 00:01:28.129 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:28.129 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:28.129 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:28.129 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:28.129 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:28.129 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:28.129 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:28.129 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:28.129 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:28.129 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:28.129 Program doxygen found: YES (/usr/bin/doxygen) 00:01:28.129 Configuring doxy-api-html.conf using configuration 00:01:28.129 Configuring doxy-api-man.conf using configuration 00:01:28.129 Program mandb found: YES (/usr/bin/mandb) 00:01:28.129 Program sphinx-build found: NO 00:01:28.129 Configuring rte_build_config.h using configuration 00:01:28.129 Message: 00:01:28.129 ================= 00:01:28.129 Applications Enabled 00:01:28.129 ================= 00:01:28.129 00:01:28.129 apps: 00:01:28.129 00:01:28.129 00:01:28.129 Message: 00:01:28.129 ================= 00:01:28.129 Libraries Enabled 00:01:28.129 ================= 00:01:28.129 00:01:28.129 libs: 00:01:28.129 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:28.129 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:28.129 cryptodev, dmadev, power, reorder, security, stack, vhost, 00:01:28.129 00:01:28.129 Message: 00:01:28.129 =============== 00:01:28.129 Drivers Enabled 00:01:28.129 =============== 00:01:28.129 00:01:28.129 common: 00:01:28.129 00:01:28.129 bus: 00:01:28.129 pci, vdev, 00:01:28.129 mempool: 00:01:28.129 ring, 00:01:28.129 dma: 00:01:28.129 00:01:28.129 net: 00:01:28.129 00:01:28.129 crypto: 00:01:28.129 00:01:28.129 compress: 00:01:28.129 00:01:28.129 vdpa: 00:01:28.129 00:01:28.129 00:01:28.129 Message: 00:01:28.129 ================= 00:01:28.129 Content Skipped 00:01:28.129 ================= 00:01:28.129 00:01:28.129 apps: 00:01:28.129 dumpcap: explicitly disabled via build config 00:01:28.129 graph: explicitly disabled via build config 00:01:28.129 pdump: explicitly disabled via build config 00:01:28.129 proc-info: explicitly disabled via build config 00:01:28.129 test-acl: explicitly disabled via build config 00:01:28.129 test-bbdev: explicitly disabled via build config 00:01:28.129 test-cmdline: explicitly disabled via build config 00:01:28.129 test-compress-perf: explicitly disabled via build config 00:01:28.129 test-crypto-perf: explicitly disabled via build config 00:01:28.129 test-dma-perf: explicitly disabled via build config 00:01:28.129 test-eventdev: explicitly disabled via build config 00:01:28.129 test-fib: explicitly disabled via build config 00:01:28.129 test-flow-perf: explicitly disabled via build config 00:01:28.129 test-gpudev: explicitly disabled via build config 00:01:28.129 test-mldev: explicitly 
disabled via build config 00:01:28.129 test-pipeline: explicitly disabled via build config 00:01:28.129 test-pmd: explicitly disabled via build config 00:01:28.129 test-regex: explicitly disabled via build config 00:01:28.129 test-sad: explicitly disabled via build config 00:01:28.129 test-security-perf: explicitly disabled via build config 00:01:28.129 00:01:28.129 libs: 00:01:28.129 argparse: explicitly disabled via build config 00:01:28.129 metrics: explicitly disabled via build config 00:01:28.129 acl: explicitly disabled via build config 00:01:28.129 bbdev: explicitly disabled via build config 00:01:28.129 bitratestats: explicitly disabled via build config 00:01:28.129 bpf: explicitly disabled via build config 00:01:28.129 cfgfile: explicitly disabled via build config 00:01:28.129 distributor: explicitly disabled via build config 00:01:28.129 efd: explicitly disabled via build config 00:01:28.129 eventdev: explicitly disabled via build config 00:01:28.129 dispatcher: explicitly disabled via build config 00:01:28.129 gpudev: explicitly disabled via build config 00:01:28.129 gro: explicitly disabled via build config 00:01:28.129 gso: explicitly disabled via build config 00:01:28.129 ip_frag: explicitly disabled via build config 00:01:28.129 jobstats: explicitly disabled via build config 00:01:28.129 latencystats: explicitly disabled via build config 00:01:28.129 lpm: explicitly disabled via build config 00:01:28.129 member: explicitly disabled via build config 00:01:28.129 pcapng: explicitly disabled via build config 00:01:28.129 rawdev: explicitly disabled via build config 00:01:28.129 regexdev: explicitly disabled via build config 00:01:28.129 mldev: explicitly disabled via build config 00:01:28.129 rib: explicitly disabled via build config 00:01:28.129 sched: explicitly disabled via build config 00:01:28.129 ipsec: explicitly disabled via build config 00:01:28.129 pdcp: explicitly disabled via build config 00:01:28.129 fib: explicitly disabled via build config 00:01:28.129 port: explicitly disabled via build config 00:01:28.129 pdump: explicitly disabled via build config 00:01:28.129 table: explicitly disabled via build config 00:01:28.129 pipeline: explicitly disabled via build config 00:01:28.129 graph: explicitly disabled via build config 00:01:28.129 node: explicitly disabled via build config 00:01:28.129 00:01:28.129 drivers: 00:01:28.129 common/cpt: not in enabled drivers build config 00:01:28.129 common/dpaax: not in enabled drivers build config 00:01:28.129 common/iavf: not in enabled drivers build config 00:01:28.129 common/idpf: not in enabled drivers build config 00:01:28.129 common/ionic: not in enabled drivers build config 00:01:28.129 common/mvep: not in enabled drivers build config 00:01:28.129 common/octeontx: not in enabled drivers build config 00:01:28.129 bus/auxiliary: not in enabled drivers build config 00:01:28.129 bus/cdx: not in enabled drivers build config 00:01:28.129 bus/dpaa: not in enabled drivers build config 00:01:28.129 bus/fslmc: not in enabled drivers build config 00:01:28.130 bus/ifpga: not in enabled drivers build config 00:01:28.130 bus/platform: not in enabled drivers build config 00:01:28.130 bus/uacce: not in enabled drivers build config 00:01:28.130 bus/vmbus: not in enabled drivers build config 00:01:28.130 common/cnxk: not in enabled drivers build config 00:01:28.130 common/mlx5: not in enabled drivers build config 00:01:28.130 common/nfp: not in enabled drivers build config 00:01:28.130 common/nitrox: not in enabled drivers build config 
00:01:28.130 common/qat: not in enabled drivers build config 00:01:28.130 common/sfc_efx: not in enabled drivers build config 00:01:28.130 mempool/bucket: not in enabled drivers build config 00:01:28.130 mempool/cnxk: not in enabled drivers build config 00:01:28.130 mempool/dpaa: not in enabled drivers build config 00:01:28.130 mempool/dpaa2: not in enabled drivers build config 00:01:28.130 mempool/octeontx: not in enabled drivers build config 00:01:28.130 mempool/stack: not in enabled drivers build config 00:01:28.130 dma/cnxk: not in enabled drivers build config 00:01:28.130 dma/dpaa: not in enabled drivers build config 00:01:28.130 dma/dpaa2: not in enabled drivers build config 00:01:28.130 dma/hisilicon: not in enabled drivers build config 00:01:28.130 dma/idxd: not in enabled drivers build config 00:01:28.130 dma/ioat: not in enabled drivers build config 00:01:28.130 dma/skeleton: not in enabled drivers build config 00:01:28.130 net/af_packet: not in enabled drivers build config 00:01:28.130 net/af_xdp: not in enabled drivers build config 00:01:28.130 net/ark: not in enabled drivers build config 00:01:28.130 net/atlantic: not in enabled drivers build config 00:01:28.130 net/avp: not in enabled drivers build config 00:01:28.130 net/axgbe: not in enabled drivers build config 00:01:28.130 net/bnx2x: not in enabled drivers build config 00:01:28.130 net/bnxt: not in enabled drivers build config 00:01:28.130 net/bonding: not in enabled drivers build config 00:01:28.130 net/cnxk: not in enabled drivers build config 00:01:28.130 net/cpfl: not in enabled drivers build config 00:01:28.130 net/cxgbe: not in enabled drivers build config 00:01:28.130 net/dpaa: not in enabled drivers build config 00:01:28.130 net/dpaa2: not in enabled drivers build config 00:01:28.130 net/e1000: not in enabled drivers build config 00:01:28.130 net/ena: not in enabled drivers build config 00:01:28.130 net/enetc: not in enabled drivers build config 00:01:28.130 net/enetfec: not in enabled drivers build config 00:01:28.130 net/enic: not in enabled drivers build config 00:01:28.130 net/failsafe: not in enabled drivers build config 00:01:28.130 net/fm10k: not in enabled drivers build config 00:01:28.130 net/gve: not in enabled drivers build config 00:01:28.130 net/hinic: not in enabled drivers build config 00:01:28.130 net/hns3: not in enabled drivers build config 00:01:28.130 net/i40e: not in enabled drivers build config 00:01:28.130 net/iavf: not in enabled drivers build config 00:01:28.130 net/ice: not in enabled drivers build config 00:01:28.130 net/idpf: not in enabled drivers build config 00:01:28.130 net/igc: not in enabled drivers build config 00:01:28.130 net/ionic: not in enabled drivers build config 00:01:28.130 net/ipn3ke: not in enabled drivers build config 00:01:28.130 net/ixgbe: not in enabled drivers build config 00:01:28.130 net/mana: not in enabled drivers build config 00:01:28.130 net/memif: not in enabled drivers build config 00:01:28.130 net/mlx4: not in enabled drivers build config 00:01:28.130 net/mlx5: not in enabled drivers build config 00:01:28.130 net/mvneta: not in enabled drivers build config 00:01:28.130 net/mvpp2: not in enabled drivers build config 00:01:28.130 net/netvsc: not in enabled drivers build config 00:01:28.130 net/nfb: not in enabled drivers build config 00:01:28.130 net/nfp: not in enabled drivers build config 00:01:28.130 net/ngbe: not in enabled drivers build config 00:01:28.130 net/null: not in enabled drivers build config 00:01:28.130 net/octeontx: not in enabled drivers 
build config 00:01:28.130 net/octeon_ep: not in enabled drivers build config 00:01:28.130 net/pcap: not in enabled drivers build config 00:01:28.130 net/pfe: not in enabled drivers build config 00:01:28.130 net/qede: not in enabled drivers build config 00:01:28.130 net/ring: not in enabled drivers build config 00:01:28.130 net/sfc: not in enabled drivers build config 00:01:28.130 net/softnic: not in enabled drivers build config 00:01:28.130 net/tap: not in enabled drivers build config 00:01:28.130 net/thunderx: not in enabled drivers build config 00:01:28.130 net/txgbe: not in enabled drivers build config 00:01:28.130 net/vdev_netvsc: not in enabled drivers build config 00:01:28.130 net/vhost: not in enabled drivers build config 00:01:28.130 net/virtio: not in enabled drivers build config 00:01:28.130 net/vmxnet3: not in enabled drivers build config 00:01:28.130 raw/*: missing internal dependency, "rawdev" 00:01:28.130 crypto/armv8: not in enabled drivers build config 00:01:28.130 crypto/bcmfs: not in enabled drivers build config 00:01:28.130 crypto/caam_jr: not in enabled drivers build config 00:01:28.130 crypto/ccp: not in enabled drivers build config 00:01:28.130 crypto/cnxk: not in enabled drivers build config 00:01:28.130 crypto/dpaa_sec: not in enabled drivers build config 00:01:28.130 crypto/dpaa2_sec: not in enabled drivers build config 00:01:28.130 crypto/ipsec_mb: not in enabled drivers build config 00:01:28.130 crypto/mlx5: not in enabled drivers build config 00:01:28.130 crypto/mvsam: not in enabled drivers build config 00:01:28.130 crypto/nitrox: not in enabled drivers build config 00:01:28.130 crypto/null: not in enabled drivers build config 00:01:28.130 crypto/octeontx: not in enabled drivers build config 00:01:28.130 crypto/openssl: not in enabled drivers build config 00:01:28.130 crypto/scheduler: not in enabled drivers build config 00:01:28.130 crypto/uadk: not in enabled drivers build config 00:01:28.130 crypto/virtio: not in enabled drivers build config 00:01:28.130 compress/isal: not in enabled drivers build config 00:01:28.130 compress/mlx5: not in enabled drivers build config 00:01:28.130 compress/nitrox: not in enabled drivers build config 00:01:28.130 compress/octeontx: not in enabled drivers build config 00:01:28.130 compress/zlib: not in enabled drivers build config 00:01:28.130 regex/*: missing internal dependency, "regexdev" 00:01:28.130 ml/*: missing internal dependency, "mldev" 00:01:28.130 vdpa/ifc: not in enabled drivers build config 00:01:28.130 vdpa/mlx5: not in enabled drivers build config 00:01:28.130 vdpa/nfp: not in enabled drivers build config 00:01:28.130 vdpa/sfc: not in enabled drivers build config 00:01:28.130 event/*: missing internal dependency, "eventdev" 00:01:28.130 baseband/*: missing internal dependency, "bbdev" 00:01:28.130 gpu/*: missing internal dependency, "gpudev" 00:01:28.130 00:01:28.130 00:01:28.697 Build targets in project: 88 00:01:28.697 00:01:28.697 DPDK 24.03.0 00:01:28.697 00:01:28.697 User defined options 00:01:28.697 buildtype : debug 00:01:28.697 default_library : shared 00:01:28.697 libdir : lib 00:01:28.697 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:28.697 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:28.697 c_link_args : 00:01:28.697 cpu_instruction_set: native 00:01:28.697 disable_apps : 
test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:28.697 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:28.697 enable_docs : false 00:01:28.697 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:28.697 enable_kmods : false 00:01:28.697 tests : false 00:01:28.697 00:01:28.697 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:28.963 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:28.963 [1/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:28.963 [2/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:28.963 [3/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:28.963 [4/274] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:28.963 [5/274] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:28.963 [6/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:28.963 [7/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:28.963 [8/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:28.963 [9/274] Linking static target lib/librte_kvargs.a 00:01:28.963 [10/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:29.225 [11/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:29.225 [12/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:29.225 [13/274] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:29.225 [14/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:29.225 [15/274] Linking static target lib/librte_log.a 00:01:29.225 [16/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:29.816 [17/274] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.816 [18/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:29.816 [19/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:29.816 [20/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:29.816 [21/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:29.816 [22/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:29.816 [23/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:29.816 [24/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:29.816 [25/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:30.080 [26/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:30.080 [27/274] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:30.080 [28/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:30.080 [29/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:30.080 [30/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:30.080 [31/274] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:30.080 [32/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:30.080 [33/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:30.080 [34/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:30.080 [35/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:30.080 [36/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:30.080 [37/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:30.080 [38/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:30.080 [39/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:30.080 [40/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:30.080 [41/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:30.080 [42/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:30.080 [43/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:30.080 [44/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:30.080 [45/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:30.080 [46/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:30.081 [47/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:30.081 [48/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:30.081 [49/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:30.081 [50/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:30.081 [51/274] Linking static target lib/librte_telemetry.a 00:01:30.081 [52/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:30.081 [53/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:30.081 [54/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:30.081 [55/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:30.081 [56/274] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:30.081 [57/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:30.081 [58/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:30.081 [59/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:30.346 [60/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:30.346 [61/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:30.346 [62/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:30.346 [63/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:30.346 [64/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:30.346 [65/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:30.346 [66/274] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.346 [67/274] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:30.346 [68/274] Linking static target lib/librte_pci.a 00:01:30.346 [69/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:30.614 [70/274] Linking target lib/librte_log.so.24.1 00:01:30.614 [71/274] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:30.614 [72/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:30.614 [73/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:30.880 [74/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:30.880 [75/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:30.880 [76/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:30.880 [77/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:30.880 [78/274] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:30.880 [79/274] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:30.880 [80/274] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:30.880 [81/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:30.880 [82/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:30.880 [83/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:30.880 [84/274] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:30.880 [85/274] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:30.880 [86/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:30.880 [87/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:31.143 [88/274] Linking static target lib/librte_ring.a 00:01:31.143 [89/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:31.143 [90/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:31.143 [91/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:31.143 [92/274] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:31.143 [93/274] Linking target lib/librte_kvargs.so.24.1 00:01:31.143 [94/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:31.143 [95/274] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:31.143 [96/274] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.143 [97/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:31.143 [98/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:31.143 [99/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:31.143 [100/274] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:31.143 [101/274] Linking static target lib/librte_meter.a 00:01:31.143 [102/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:31.143 [103/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:31.143 [104/274] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:31.143 [105/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:31.143 [106/274] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:31.143 [107/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:31.143 [108/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:31.143 [109/274] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:31.143 [110/274] Linking static target lib/librte_rcu.a 00:01:31.143 [111/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:31.143 [112/274] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:31.143 [113/274] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.143 [114/274] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:31.143 [115/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:31.143 [116/274] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:31.143 [117/274] Linking static target lib/librte_mempool.a 00:01:31.144 [118/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:31.144 [119/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:31.409 [120/274] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:31.409 [121/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:31.409 [122/274] Linking static target lib/librte_eal.a 00:01:31.409 [123/274] Linking target lib/librte_telemetry.so.24.1 00:01:31.409 [124/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:31.409 [125/274] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:31.409 [126/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:31.409 [127/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:31.409 [128/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:31.409 [129/274] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:31.409 [130/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:31.409 [131/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:31.409 [132/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:31.409 [133/274] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.674 [134/274] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:31.674 [135/274] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.674 [136/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:31.674 [137/274] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:31.674 [138/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:31.674 [139/274] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:31.674 [140/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:31.674 [141/274] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.674 [142/274] Linking static target lib/librte_net.a 00:01:31.674 [143/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:31.936 [144/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:31.936 [145/274] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:31.936 [146/274] Linking static target lib/librte_cmdline.a 00:01:31.936 [147/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:31.936 [148/274] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:32.196 [149/274] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:32.196 [150/274] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:32.196 [151/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:32.196 [152/274] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:32.196 [153/274] Linking static target lib/librte_timer.a 00:01:32.196 [154/274] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:32.196 [155/274] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:32.196 [156/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:32.196 [157/274] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:32.196 [158/274] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:32.196 [159/274] Linking static target lib/librte_dmadev.a 00:01:32.196 [160/274] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.196 [161/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:32.196 [162/274] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:32.196 [163/274] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:32.459 [164/274] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:32.459 [165/274] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:32.459 [166/274] Linking static target lib/librte_stack.a 00:01:32.459 [167/274] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:32.459 [168/274] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:32.459 [169/274] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:32.459 [170/274] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:32.459 [171/274] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:32.459 [172/274] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:32.459 [173/274] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:32.459 [174/274] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.459 [175/274] Linking static target lib/librte_power.a 00:01:32.459 [176/274] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:32.459 [177/274] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:32.459 [178/274] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.720 [179/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:32.720 [180/274] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.720 [181/274] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:32.720 [182/274] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:32.720 [183/274] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:32.720 [184/274] Linking static target lib/librte_hash.a 00:01:32.720 [185/274] Linking static target lib/librte_compressdev.a 00:01:32.720 [186/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:32.720 [187/274] Linking static target lib/librte_mbuf.a 00:01:32.720 [188/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:32.720 [189/274] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:32.720 [190/274] Linking static target lib/librte_reorder.a 00:01:32.720 [191/274] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.720 [192/274] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:32.720 [193/274] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:32.720 [194/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:32.720 [195/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:32.720 [196/274] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:32.980 [197/274] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.980 [198/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:32.980 [199/274] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:32.980 [200/274] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:32.980 [201/274] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:32.980 [202/274] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.980 [203/274] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:32.980 [204/274] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:32.980 [205/274] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:32.980 [206/274] Linking static target lib/librte_security.a 00:01:32.980 [207/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:32.980 [208/274] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.980 [209/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:33.240 [210/274] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.240 [211/274] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:33.240 [212/274] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.240 [213/274] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:33.240 [214/274] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:33.240 [215/274] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:33.240 [216/274] Linking static target drivers/librte_bus_pci.a 00:01:33.240 [217/274] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:33.240 [218/274] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:33.240 [219/274] Linking static target drivers/librte_bus_vdev.a 00:01:33.240 [220/274] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.240 [221/274] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:33.240 [222/274] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:33.240 [223/274] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:33.240 [224/274] Linking static target drivers/librte_mempool_ring.a 00:01:33.240 [225/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:33.240 [226/274] Linking static target lib/librte_ethdev.a 00:01:33.240 [227/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:33.240 [228/274] Linking static target lib/librte_cryptodev.a 00:01:33.504 [229/274] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.504 [230/274] Generating 
drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.504 [231/274] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.442 [232/274] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.823 [233/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:37.727 [234/274] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.727 [235/274] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.727 [236/274] Linking target lib/librte_eal.so.24.1 00:01:37.727 [237/274] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:37.727 [238/274] Linking target lib/librte_ring.so.24.1 00:01:37.727 [239/274] Linking target lib/librte_timer.so.24.1 00:01:37.727 [240/274] Linking target lib/librte_dmadev.so.24.1 00:01:37.728 [241/274] Linking target lib/librte_meter.so.24.1 00:01:37.728 [242/274] Linking target lib/librte_pci.so.24.1 00:01:37.728 [243/274] Linking target lib/librte_stack.so.24.1 00:01:37.728 [244/274] Linking target drivers/librte_bus_vdev.so.24.1 00:01:37.987 [245/274] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:37.987 [246/274] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:37.987 [247/274] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:37.987 [248/274] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:37.987 [249/274] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:37.987 [250/274] Linking target lib/librte_rcu.so.24.1 00:01:37.987 [251/274] Linking target lib/librte_mempool.so.24.1 00:01:37.987 [252/274] Linking target drivers/librte_bus_pci.so.24.1 00:01:37.987 [253/274] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:37.987 [254/274] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:38.246 [255/274] Linking target drivers/librte_mempool_ring.so.24.1 00:01:38.246 [256/274] Linking target lib/librte_mbuf.so.24.1 00:01:38.246 [257/274] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:38.246 [258/274] Linking target lib/librte_reorder.so.24.1 00:01:38.246 [259/274] Linking target lib/librte_net.so.24.1 00:01:38.246 [260/274] Linking target lib/librte_compressdev.so.24.1 00:01:38.246 [261/274] Linking target lib/librte_cryptodev.so.24.1 00:01:38.504 [262/274] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:38.504 [263/274] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:38.504 [264/274] Linking target lib/librte_security.so.24.1 00:01:38.504 [265/274] Linking target lib/librte_hash.so.24.1 00:01:38.504 [266/274] Linking target lib/librte_cmdline.so.24.1 00:01:38.504 [267/274] Linking target lib/librte_ethdev.so.24.1 00:01:38.504 [268/274] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:38.504 [269/274] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:38.762 [270/274] Linking target lib/librte_power.so.24.1 00:01:41.296 [271/274] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:41.296 [272/274] Linking static target lib/librte_vhost.a 00:01:42.237 [273/274] Generating 
lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.237 [274/274] Linking target lib/librte_vhost.so.24.1 00:01:42.237 INFO: autodetecting backend as ninja 00:01:42.237 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:43.173 CC lib/ut_mock/mock.o 00:01:43.173 CC lib/ut/ut.o 00:01:43.173 CC lib/log/log.o 00:01:43.173 CC lib/log/log_flags.o 00:01:43.173 CC lib/log/log_deprecated.o 00:01:43.432 LIB libspdk_log.a 00:01:43.432 LIB libspdk_ut.a 00:01:43.432 LIB libspdk_ut_mock.a 00:01:43.432 SO libspdk_ut.so.2.0 00:01:43.432 SO libspdk_log.so.7.0 00:01:43.432 SO libspdk_ut_mock.so.6.0 00:01:43.432 SYMLINK libspdk_ut.so 00:01:43.432 SYMLINK libspdk_ut_mock.so 00:01:43.432 SYMLINK libspdk_log.so 00:01:43.691 CC lib/dma/dma.o 00:01:43.691 CC lib/ioat/ioat.o 00:01:43.691 CXX lib/trace_parser/trace.o 00:01:43.691 CC lib/util/base64.o 00:01:43.691 CC lib/util/bit_array.o 00:01:43.691 CC lib/util/cpuset.o 00:01:43.691 CC lib/util/crc16.o 00:01:43.691 CC lib/util/crc32.o 00:01:43.691 CC lib/util/crc32c.o 00:01:43.691 CC lib/util/crc32_ieee.o 00:01:43.691 CC lib/util/crc64.o 00:01:43.691 CC lib/util/dif.o 00:01:43.691 CC lib/util/fd.o 00:01:43.691 CC lib/util/file.o 00:01:43.691 CC lib/util/hexlify.o 00:01:43.691 CC lib/util/iov.o 00:01:43.691 CC lib/util/math.o 00:01:43.691 CC lib/util/pipe.o 00:01:43.691 CC lib/util/strerror_tls.o 00:01:43.691 CC lib/util/string.o 00:01:43.691 CC lib/util/uuid.o 00:01:43.691 CC lib/util/fd_group.o 00:01:43.691 CC lib/util/xor.o 00:01:43.691 CC lib/util/zipf.o 00:01:43.691 CC lib/vfio_user/host/vfio_user_pci.o 00:01:43.691 CC lib/vfio_user/host/vfio_user.o 00:01:43.949 LIB libspdk_dma.a 00:01:43.949 SO libspdk_dma.so.4.0 00:01:43.949 SYMLINK libspdk_dma.so 00:01:43.949 LIB libspdk_ioat.a 00:01:43.949 SO libspdk_ioat.so.7.0 00:01:43.949 LIB libspdk_vfio_user.a 00:01:43.949 SYMLINK libspdk_ioat.so 00:01:43.949 SO libspdk_vfio_user.so.5.0 00:01:44.208 SYMLINK libspdk_vfio_user.so 00:01:44.208 LIB libspdk_util.a 00:01:44.208 SO libspdk_util.so.9.0 00:01:44.468 SYMLINK libspdk_util.so 00:01:44.468 CC lib/conf/conf.o 00:01:44.468 CC lib/rdma/common.o 00:01:44.468 CC lib/idxd/idxd.o 00:01:44.468 CC lib/json/json_parse.o 00:01:44.468 CC lib/vmd/vmd.o 00:01:44.468 CC lib/rdma/rdma_verbs.o 00:01:44.468 CC lib/idxd/idxd_user.o 00:01:44.468 CC lib/env_dpdk/env.o 00:01:44.468 CC lib/json/json_util.o 00:01:44.468 CC lib/vmd/led.o 00:01:44.468 CC lib/env_dpdk/memory.o 00:01:44.468 CC lib/json/json_write.o 00:01:44.468 CC lib/env_dpdk/pci.o 00:01:44.468 CC lib/env_dpdk/init.o 00:01:44.468 CC lib/env_dpdk/threads.o 00:01:44.468 CC lib/env_dpdk/pci_ioat.o 00:01:44.468 CC lib/env_dpdk/pci_virtio.o 00:01:44.468 CC lib/env_dpdk/pci_vmd.o 00:01:44.468 CC lib/env_dpdk/pci_idxd.o 00:01:44.468 CC lib/env_dpdk/pci_event.o 00:01:44.468 CC lib/env_dpdk/sigbus_handler.o 00:01:44.468 CC lib/env_dpdk/pci_dpdk.o 00:01:44.468 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:44.468 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:44.728 LIB libspdk_conf.a 00:01:44.728 SO libspdk_conf.so.6.0 00:01:44.987 LIB libspdk_rdma.a 00:01:44.987 LIB libspdk_json.a 00:01:44.987 SYMLINK libspdk_conf.so 00:01:44.987 SO libspdk_rdma.so.6.0 00:01:44.987 SO libspdk_json.so.6.0 00:01:44.987 SYMLINK libspdk_rdma.so 00:01:44.987 SYMLINK libspdk_json.so 00:01:44.987 CC lib/jsonrpc/jsonrpc_server.o 00:01:44.987 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:44.987 CC lib/jsonrpc/jsonrpc_client.o 00:01:44.987 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:01:44.987 LIB libspdk_idxd.a 00:01:45.246 SO libspdk_idxd.so.12.0 00:01:45.246 SYMLINK libspdk_idxd.so 00:01:45.247 LIB libspdk_vmd.a 00:01:45.247 SO libspdk_vmd.so.6.0 00:01:45.247 SYMLINK libspdk_vmd.so 00:01:45.505 LIB libspdk_jsonrpc.a 00:01:45.505 SO libspdk_jsonrpc.so.6.0 00:01:45.505 LIB libspdk_trace_parser.a 00:01:45.505 SO libspdk_trace_parser.so.5.0 00:01:45.505 SYMLINK libspdk_jsonrpc.so 00:01:45.505 SYMLINK libspdk_trace_parser.so 00:01:45.764 CC lib/rpc/rpc.o 00:01:45.764 LIB libspdk_rpc.a 00:01:45.764 SO libspdk_rpc.so.6.0 00:01:46.023 SYMLINK libspdk_rpc.so 00:01:46.023 CC lib/notify/notify.o 00:01:46.023 CC lib/trace/trace.o 00:01:46.023 CC lib/keyring/keyring.o 00:01:46.023 CC lib/keyring/keyring_rpc.o 00:01:46.023 CC lib/trace/trace_flags.o 00:01:46.023 CC lib/notify/notify_rpc.o 00:01:46.023 CC lib/trace/trace_rpc.o 00:01:46.290 LIB libspdk_notify.a 00:01:46.290 SO libspdk_notify.so.6.0 00:01:46.290 LIB libspdk_keyring.a 00:01:46.290 SYMLINK libspdk_notify.so 00:01:46.290 LIB libspdk_trace.a 00:01:46.290 SO libspdk_keyring.so.1.0 00:01:46.290 SO libspdk_trace.so.10.0 00:01:46.290 SYMLINK libspdk_keyring.so 00:01:46.551 SYMLINK libspdk_trace.so 00:01:46.551 LIB libspdk_env_dpdk.a 00:01:46.551 CC lib/sock/sock.o 00:01:46.551 CC lib/thread/thread.o 00:01:46.551 CC lib/sock/sock_rpc.o 00:01:46.551 CC lib/thread/iobuf.o 00:01:46.551 SO libspdk_env_dpdk.so.14.0 00:01:46.809 SYMLINK libspdk_env_dpdk.so 00:01:47.066 LIB libspdk_sock.a 00:01:47.066 SO libspdk_sock.so.9.0 00:01:47.066 SYMLINK libspdk_sock.so 00:01:47.325 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:47.325 CC lib/nvme/nvme_ctrlr.o 00:01:47.325 CC lib/nvme/nvme_fabric.o 00:01:47.325 CC lib/nvme/nvme_ns_cmd.o 00:01:47.325 CC lib/nvme/nvme_ns.o 00:01:47.325 CC lib/nvme/nvme_pcie_common.o 00:01:47.325 CC lib/nvme/nvme_pcie.o 00:01:47.325 CC lib/nvme/nvme_qpair.o 00:01:47.325 CC lib/nvme/nvme.o 00:01:47.325 CC lib/nvme/nvme_quirks.o 00:01:47.325 CC lib/nvme/nvme_transport.o 00:01:47.325 CC lib/nvme/nvme_discovery.o 00:01:47.325 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:47.325 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:47.325 CC lib/nvme/nvme_tcp.o 00:01:47.325 CC lib/nvme/nvme_opal.o 00:01:47.325 CC lib/nvme/nvme_io_msg.o 00:01:47.325 CC lib/nvme/nvme_poll_group.o 00:01:47.325 CC lib/nvme/nvme_zns.o 00:01:47.325 CC lib/nvme/nvme_stubs.o 00:01:47.325 CC lib/nvme/nvme_auth.o 00:01:47.325 CC lib/nvme/nvme_cuse.o 00:01:47.325 CC lib/nvme/nvme_rdma.o 00:01:47.325 CC lib/nvme/nvme_vfio_user.o 00:01:48.264 LIB libspdk_thread.a 00:01:48.264 SO libspdk_thread.so.10.0 00:01:48.523 SYMLINK libspdk_thread.so 00:01:48.524 CC lib/virtio/virtio.o 00:01:48.524 CC lib/init/json_config.o 00:01:48.524 CC lib/blob/blobstore.o 00:01:48.524 CC lib/virtio/virtio_vhost_user.o 00:01:48.524 CC lib/init/subsystem.o 00:01:48.524 CC lib/blob/request.o 00:01:48.524 CC lib/virtio/virtio_vfio_user.o 00:01:48.524 CC lib/init/subsystem_rpc.o 00:01:48.524 CC lib/blob/zeroes.o 00:01:48.524 CC lib/virtio/virtio_pci.o 00:01:48.524 CC lib/init/rpc.o 00:01:48.524 CC lib/accel/accel.o 00:01:48.524 CC lib/vfu_tgt/tgt_endpoint.o 00:01:48.524 CC lib/blob/blob_bs_dev.o 00:01:48.524 CC lib/accel/accel_rpc.o 00:01:48.524 CC lib/vfu_tgt/tgt_rpc.o 00:01:48.524 CC lib/accel/accel_sw.o 00:01:48.781 LIB libspdk_init.a 00:01:48.781 SO libspdk_init.so.5.0 00:01:49.039 LIB libspdk_virtio.a 00:01:49.039 LIB libspdk_vfu_tgt.a 00:01:49.039 SYMLINK libspdk_init.so 00:01:49.039 SO libspdk_vfu_tgt.so.3.0 00:01:49.040 SO libspdk_virtio.so.7.0 00:01:49.040 
SYMLINK libspdk_vfu_tgt.so 00:01:49.040 SYMLINK libspdk_virtio.so 00:01:49.040 CC lib/event/app.o 00:01:49.040 CC lib/event/reactor.o 00:01:49.040 CC lib/event/log_rpc.o 00:01:49.040 CC lib/event/app_rpc.o 00:01:49.040 CC lib/event/scheduler_static.o 00:01:49.609 LIB libspdk_event.a 00:01:49.609 SO libspdk_event.so.13.0 00:01:49.609 SYMLINK libspdk_event.so 00:01:49.609 LIB libspdk_accel.a 00:01:49.609 SO libspdk_accel.so.15.0 00:01:49.609 LIB libspdk_nvme.a 00:01:49.609 SYMLINK libspdk_accel.so 00:01:49.874 SO libspdk_nvme.so.13.0 00:01:49.874 CC lib/bdev/bdev.o 00:01:49.875 CC lib/bdev/bdev_rpc.o 00:01:49.875 CC lib/bdev/bdev_zone.o 00:01:49.875 CC lib/bdev/part.o 00:01:49.875 CC lib/bdev/scsi_nvme.o 00:01:50.139 SYMLINK libspdk_nvme.so 00:01:51.518 LIB libspdk_blob.a 00:01:51.518 SO libspdk_blob.so.11.0 00:01:51.775 SYMLINK libspdk_blob.so 00:01:51.775 CC lib/lvol/lvol.o 00:01:51.775 CC lib/blobfs/blobfs.o 00:01:51.775 CC lib/blobfs/tree.o 00:01:52.340 LIB libspdk_bdev.a 00:01:52.599 SO libspdk_bdev.so.15.0 00:01:52.599 SYMLINK libspdk_bdev.so 00:01:52.599 LIB libspdk_blobfs.a 00:01:52.599 SO libspdk_blobfs.so.10.0 00:01:52.599 CC lib/nbd/nbd.o 00:01:52.599 CC lib/nbd/nbd_rpc.o 00:01:52.599 CC lib/ublk/ublk.o 00:01:52.599 CC lib/ublk/ublk_rpc.o 00:01:52.599 CC lib/scsi/dev.o 00:01:52.599 CC lib/nvmf/ctrlr.o 00:01:52.599 CC lib/scsi/lun.o 00:01:52.599 CC lib/nvmf/ctrlr_discovery.o 00:01:52.599 CC lib/ftl/ftl_core.o 00:01:52.599 CC lib/scsi/port.o 00:01:52.599 CC lib/nvmf/ctrlr_bdev.o 00:01:52.599 CC lib/scsi/scsi.o 00:01:52.599 CC lib/ftl/ftl_init.o 00:01:52.599 CC lib/scsi/scsi_bdev.o 00:01:52.599 CC lib/nvmf/subsystem.o 00:01:52.600 CC lib/nvmf/nvmf.o 00:01:52.600 CC lib/ftl/ftl_layout.o 00:01:52.600 CC lib/scsi/scsi_pr.o 00:01:52.864 CC lib/nvmf/nvmf_rpc.o 00:01:52.864 CC lib/nvmf/transport.o 00:01:52.864 CC lib/ftl/ftl_io.o 00:01:52.864 CC lib/ftl/ftl_debug.o 00:01:52.864 CC lib/scsi/scsi_rpc.o 00:01:52.864 CC lib/scsi/task.o 00:01:52.864 CC lib/nvmf/tcp.o 00:01:52.864 CC lib/nvmf/stubs.o 00:01:52.864 CC lib/ftl/ftl_sb.o 00:01:52.864 CC lib/nvmf/mdns_server.o 00:01:52.864 CC lib/ftl/ftl_l2p.o 00:01:52.864 CC lib/ftl/ftl_l2p_flat.o 00:01:52.864 CC lib/nvmf/vfio_user.o 00:01:52.864 CC lib/nvmf/rdma.o 00:01:52.864 CC lib/nvmf/auth.o 00:01:52.864 CC lib/ftl/ftl_nv_cache.o 00:01:52.864 CC lib/ftl/ftl_band.o 00:01:52.864 CC lib/ftl/ftl_band_ops.o 00:01:52.864 CC lib/ftl/ftl_writer.o 00:01:52.864 CC lib/ftl/ftl_rq.o 00:01:52.864 CC lib/ftl/ftl_reloc.o 00:01:52.864 CC lib/ftl/ftl_l2p_cache.o 00:01:52.864 CC lib/ftl/ftl_p2l.o 00:01:52.864 CC lib/ftl/mngt/ftl_mngt.o 00:01:52.864 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:52.864 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:52.864 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:52.864 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:52.864 LIB libspdk_lvol.a 00:01:52.864 SYMLINK libspdk_blobfs.so 00:01:52.864 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:52.864 SO libspdk_lvol.so.10.0 00:01:52.864 SYMLINK libspdk_lvol.so 00:01:53.128 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:53.128 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:53.128 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:53.128 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:53.128 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:53.128 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:53.128 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:53.128 CC lib/ftl/utils/ftl_conf.o 00:01:53.128 CC lib/ftl/utils/ftl_md.o 00:01:53.128 CC lib/ftl/utils/ftl_mempool.o 00:01:53.128 CC lib/ftl/utils/ftl_bitmap.o 00:01:53.128 CC lib/ftl/utils/ftl_property.o 00:01:53.128 CC 
lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:53.391 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:53.391 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:53.391 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:53.391 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:53.391 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:53.391 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:53.391 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:53.391 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:53.391 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:53.391 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:53.391 CC lib/ftl/base/ftl_base_dev.o 00:01:53.391 CC lib/ftl/base/ftl_base_bdev.o 00:01:53.391 CC lib/ftl/ftl_trace.o 00:01:53.650 LIB libspdk_nbd.a 00:01:53.650 SO libspdk_nbd.so.7.0 00:01:53.650 SYMLINK libspdk_nbd.so 00:01:53.650 LIB libspdk_scsi.a 00:01:53.650 SO libspdk_scsi.so.9.0 00:01:53.909 SYMLINK libspdk_scsi.so 00:01:53.909 LIB libspdk_ublk.a 00:01:53.909 SO libspdk_ublk.so.3.0 00:01:53.909 SYMLINK libspdk_ublk.so 00:01:53.909 CC lib/vhost/vhost.o 00:01:53.909 CC lib/iscsi/conn.o 00:01:53.909 CC lib/vhost/vhost_rpc.o 00:01:53.909 CC lib/vhost/vhost_scsi.o 00:01:53.909 CC lib/iscsi/init_grp.o 00:01:53.909 CC lib/iscsi/iscsi.o 00:01:53.909 CC lib/iscsi/md5.o 00:01:53.909 CC lib/vhost/vhost_blk.o 00:01:53.909 CC lib/vhost/rte_vhost_user.o 00:01:53.909 CC lib/iscsi/param.o 00:01:53.909 CC lib/iscsi/portal_grp.o 00:01:53.909 CC lib/iscsi/tgt_node.o 00:01:53.909 CC lib/iscsi/iscsi_subsystem.o 00:01:53.909 CC lib/iscsi/iscsi_rpc.o 00:01:53.909 CC lib/iscsi/task.o 00:01:54.167 LIB libspdk_ftl.a 00:01:54.426 SO libspdk_ftl.so.9.0 00:01:54.684 SYMLINK libspdk_ftl.so 00:01:55.250 LIB libspdk_vhost.a 00:01:55.250 SO libspdk_vhost.so.8.0 00:01:55.250 SYMLINK libspdk_vhost.so 00:01:55.250 LIB libspdk_nvmf.a 00:01:55.510 LIB libspdk_iscsi.a 00:01:55.510 SO libspdk_nvmf.so.18.0 00:01:55.510 SO libspdk_iscsi.so.8.0 00:01:55.510 SYMLINK libspdk_iscsi.so 00:01:55.769 SYMLINK libspdk_nvmf.so 00:01:56.027 CC module/vfu_device/vfu_virtio.o 00:01:56.027 CC module/env_dpdk/env_dpdk_rpc.o 00:01:56.027 CC module/vfu_device/vfu_virtio_blk.o 00:01:56.027 CC module/vfu_device/vfu_virtio_scsi.o 00:01:56.027 CC module/vfu_device/vfu_virtio_rpc.o 00:01:56.027 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:56.027 CC module/blob/bdev/blob_bdev.o 00:01:56.027 CC module/accel/ioat/accel_ioat.o 00:01:56.027 CC module/keyring/file/keyring.o 00:01:56.027 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:56.027 CC module/accel/ioat/accel_ioat_rpc.o 00:01:56.027 CC module/accel/dsa/accel_dsa.o 00:01:56.027 CC module/keyring/file/keyring_rpc.o 00:01:56.027 CC module/accel/error/accel_error.o 00:01:56.027 CC module/accel/error/accel_error_rpc.o 00:01:56.027 CC module/accel/dsa/accel_dsa_rpc.o 00:01:56.027 CC module/scheduler/gscheduler/gscheduler.o 00:01:56.027 CC module/sock/posix/posix.o 00:01:56.027 CC module/accel/iaa/accel_iaa.o 00:01:56.027 CC module/accel/iaa/accel_iaa_rpc.o 00:01:56.027 LIB libspdk_env_dpdk_rpc.a 00:01:56.027 SO libspdk_env_dpdk_rpc.so.6.0 00:01:56.027 SYMLINK libspdk_env_dpdk_rpc.so 00:01:56.284 LIB libspdk_keyring_file.a 00:01:56.284 LIB libspdk_scheduler_gscheduler.a 00:01:56.284 LIB libspdk_scheduler_dpdk_governor.a 00:01:56.284 SO libspdk_scheduler_gscheduler.so.4.0 00:01:56.285 SO libspdk_keyring_file.so.1.0 00:01:56.285 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:56.285 LIB libspdk_accel_error.a 00:01:56.285 LIB libspdk_accel_ioat.a 00:01:56.285 LIB libspdk_scheduler_dynamic.a 00:01:56.285 LIB libspdk_accel_iaa.a 00:01:56.285 SO 
libspdk_accel_error.so.2.0 00:01:56.285 SO libspdk_scheduler_dynamic.so.4.0 00:01:56.285 SO libspdk_accel_ioat.so.6.0 00:01:56.285 SYMLINK libspdk_scheduler_gscheduler.so 00:01:56.285 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:56.285 SYMLINK libspdk_keyring_file.so 00:01:56.285 SO libspdk_accel_iaa.so.3.0 00:01:56.285 LIB libspdk_accel_dsa.a 00:01:56.285 SYMLINK libspdk_scheduler_dynamic.so 00:01:56.285 LIB libspdk_blob_bdev.a 00:01:56.285 SYMLINK libspdk_accel_error.so 00:01:56.285 SYMLINK libspdk_accel_ioat.so 00:01:56.285 SO libspdk_accel_dsa.so.5.0 00:01:56.285 SO libspdk_blob_bdev.so.11.0 00:01:56.285 SYMLINK libspdk_accel_iaa.so 00:01:56.285 SYMLINK libspdk_blob_bdev.so 00:01:56.285 SYMLINK libspdk_accel_dsa.so 00:01:56.544 LIB libspdk_vfu_device.a 00:01:56.544 SO libspdk_vfu_device.so.3.0 00:01:56.544 CC module/bdev/aio/bdev_aio.o 00:01:56.544 CC module/blobfs/bdev/blobfs_bdev.o 00:01:56.544 CC module/bdev/aio/bdev_aio_rpc.o 00:01:56.544 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:56.544 CC module/bdev/ftl/bdev_ftl.o 00:01:56.544 CC module/bdev/malloc/bdev_malloc.o 00:01:56.544 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:56.544 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:56.544 CC module/bdev/null/bdev_null.o 00:01:56.544 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:56.544 CC module/bdev/iscsi/bdev_iscsi.o 00:01:56.544 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:56.544 CC module/bdev/null/bdev_null_rpc.o 00:01:56.544 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:56.544 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:56.544 CC module/bdev/gpt/gpt.o 00:01:56.544 CC module/bdev/gpt/vbdev_gpt.o 00:01:56.544 CC module/bdev/delay/vbdev_delay.o 00:01:56.544 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:56.544 CC module/bdev/split/vbdev_split.o 00:01:56.544 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:56.544 CC module/bdev/passthru/vbdev_passthru.o 00:01:56.544 CC module/bdev/split/vbdev_split_rpc.o 00:01:56.544 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:56.544 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:56.544 CC module/bdev/raid/bdev_raid.o 00:01:56.544 CC module/bdev/nvme/bdev_nvme.o 00:01:56.544 CC module/bdev/raid/bdev_raid_rpc.o 00:01:56.544 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:56.544 CC module/bdev/error/vbdev_error.o 00:01:56.544 CC module/bdev/raid/bdev_raid_sb.o 00:01:56.544 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:56.544 CC module/bdev/nvme/nvme_rpc.o 00:01:56.544 CC module/bdev/error/vbdev_error_rpc.o 00:01:56.544 CC module/bdev/raid/raid0.o 00:01:56.544 CC module/bdev/nvme/bdev_mdns_client.o 00:01:56.544 CC module/bdev/nvme/vbdev_opal.o 00:01:56.544 CC module/bdev/lvol/vbdev_lvol.o 00:01:56.544 CC module/bdev/raid/raid1.o 00:01:56.544 CC module/bdev/raid/concat.o 00:01:56.544 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:56.544 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:56.803 SYMLINK libspdk_vfu_device.so 00:01:56.803 LIB libspdk_sock_posix.a 00:01:57.062 SO libspdk_sock_posix.so.6.0 00:01:57.062 LIB libspdk_blobfs_bdev.a 00:01:57.062 LIB libspdk_bdev_split.a 00:01:57.062 SO libspdk_blobfs_bdev.so.6.0 00:01:57.062 SO libspdk_bdev_split.so.6.0 00:01:57.062 SYMLINK libspdk_sock_posix.so 00:01:57.062 LIB libspdk_bdev_iscsi.a 00:01:57.062 SYMLINK libspdk_blobfs_bdev.so 00:01:57.062 SYMLINK libspdk_bdev_split.so 00:01:57.062 LIB libspdk_bdev_null.a 00:01:57.062 SO libspdk_bdev_iscsi.so.6.0 00:01:57.062 LIB libspdk_bdev_error.a 00:01:57.062 LIB libspdk_bdev_gpt.a 00:01:57.062 LIB libspdk_bdev_ftl.a 00:01:57.062 SO libspdk_bdev_null.so.6.0 
00:01:57.062 LIB libspdk_bdev_aio.a 00:01:57.062 SO libspdk_bdev_error.so.6.0 00:01:57.062 SO libspdk_bdev_gpt.so.6.0 00:01:57.062 SO libspdk_bdev_ftl.so.6.0 00:01:57.062 SYMLINK libspdk_bdev_iscsi.so 00:01:57.062 SO libspdk_bdev_aio.so.6.0 00:01:57.062 LIB libspdk_bdev_passthru.a 00:01:57.320 LIB libspdk_bdev_delay.a 00:01:57.321 SYMLINK libspdk_bdev_null.so 00:01:57.321 SO libspdk_bdev_passthru.so.6.0 00:01:57.321 SYMLINK libspdk_bdev_error.so 00:01:57.321 SYMLINK libspdk_bdev_gpt.so 00:01:57.321 LIB libspdk_bdev_zone_block.a 00:01:57.321 SYMLINK libspdk_bdev_ftl.so 00:01:57.321 SO libspdk_bdev_delay.so.6.0 00:01:57.321 SYMLINK libspdk_bdev_aio.so 00:01:57.321 LIB libspdk_bdev_malloc.a 00:01:57.321 SO libspdk_bdev_zone_block.so.6.0 00:01:57.321 SYMLINK libspdk_bdev_passthru.so 00:01:57.321 SO libspdk_bdev_malloc.so.6.0 00:01:57.321 SYMLINK libspdk_bdev_delay.so 00:01:57.321 SYMLINK libspdk_bdev_zone_block.so 00:01:57.321 SYMLINK libspdk_bdev_malloc.so 00:01:57.321 LIB libspdk_bdev_lvol.a 00:01:57.321 SO libspdk_bdev_lvol.so.6.0 00:01:57.321 LIB libspdk_bdev_virtio.a 00:01:57.321 SO libspdk_bdev_virtio.so.6.0 00:01:57.321 SYMLINK libspdk_bdev_lvol.so 00:01:57.579 SYMLINK libspdk_bdev_virtio.so 00:01:57.840 LIB libspdk_bdev_raid.a 00:01:57.840 SO libspdk_bdev_raid.so.6.0 00:01:57.840 SYMLINK libspdk_bdev_raid.so 00:01:59.219 LIB libspdk_bdev_nvme.a 00:01:59.219 SO libspdk_bdev_nvme.so.7.0 00:01:59.219 SYMLINK libspdk_bdev_nvme.so 00:01:59.477 CC module/event/subsystems/sock/sock.o 00:01:59.477 CC module/event/subsystems/scheduler/scheduler.o 00:01:59.477 CC module/event/subsystems/keyring/keyring.o 00:01:59.477 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:59.477 CC module/event/subsystems/vmd/vmd.o 00:01:59.477 CC module/event/subsystems/iobuf/iobuf.o 00:01:59.477 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:59.477 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:59.477 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:59.736 LIB libspdk_event_keyring.a 00:01:59.736 LIB libspdk_event_vhost_blk.a 00:01:59.736 LIB libspdk_event_sock.a 00:01:59.736 LIB libspdk_event_vfu_tgt.a 00:01:59.736 LIB libspdk_event_scheduler.a 00:01:59.736 LIB libspdk_event_vmd.a 00:01:59.736 SO libspdk_event_keyring.so.1.0 00:01:59.736 SO libspdk_event_vhost_blk.so.3.0 00:01:59.736 SO libspdk_event_vfu_tgt.so.3.0 00:01:59.736 LIB libspdk_event_iobuf.a 00:01:59.736 SO libspdk_event_sock.so.5.0 00:01:59.736 SO libspdk_event_scheduler.so.4.0 00:01:59.736 SO libspdk_event_vmd.so.6.0 00:01:59.736 SO libspdk_event_iobuf.so.3.0 00:01:59.736 SYMLINK libspdk_event_keyring.so 00:01:59.736 SYMLINK libspdk_event_vhost_blk.so 00:01:59.736 SYMLINK libspdk_event_vfu_tgt.so 00:01:59.736 SYMLINK libspdk_event_sock.so 00:01:59.736 SYMLINK libspdk_event_scheduler.so 00:01:59.736 SYMLINK libspdk_event_vmd.so 00:01:59.736 SYMLINK libspdk_event_iobuf.so 00:01:59.995 CC module/event/subsystems/accel/accel.o 00:01:59.995 LIB libspdk_event_accel.a 00:01:59.995 SO libspdk_event_accel.so.6.0 00:01:59.995 SYMLINK libspdk_event_accel.so 00:02:00.255 CC module/event/subsystems/bdev/bdev.o 00:02:00.518 LIB libspdk_event_bdev.a 00:02:00.518 SO libspdk_event_bdev.so.6.0 00:02:00.518 SYMLINK libspdk_event_bdev.so 00:02:00.778 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:00.778 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:00.778 CC module/event/subsystems/ublk/ublk.o 00:02:00.778 CC module/event/subsystems/nbd/nbd.o 00:02:00.778 CC module/event/subsystems/scsi/scsi.o 00:02:00.778 LIB libspdk_event_ublk.a 00:02:00.778 LIB 
libspdk_event_nbd.a 00:02:00.778 LIB libspdk_event_scsi.a 00:02:00.778 SO libspdk_event_nbd.so.6.0 00:02:00.778 SO libspdk_event_ublk.so.3.0 00:02:00.778 SO libspdk_event_scsi.so.6.0 00:02:01.037 SYMLINK libspdk_event_nbd.so 00:02:01.037 SYMLINK libspdk_event_ublk.so 00:02:01.037 SYMLINK libspdk_event_scsi.so 00:02:01.037 LIB libspdk_event_nvmf.a 00:02:01.037 SO libspdk_event_nvmf.so.6.0 00:02:01.037 SYMLINK libspdk_event_nvmf.so 00:02:01.037 CC module/event/subsystems/iscsi/iscsi.o 00:02:01.037 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:01.295 LIB libspdk_event_vhost_scsi.a 00:02:01.295 LIB libspdk_event_iscsi.a 00:02:01.295 SO libspdk_event_vhost_scsi.so.3.0 00:02:01.295 SO libspdk_event_iscsi.so.6.0 00:02:01.295 SYMLINK libspdk_event_vhost_scsi.so 00:02:01.295 SYMLINK libspdk_event_iscsi.so 00:02:01.561 SO libspdk.so.6.0 00:02:01.561 SYMLINK libspdk.so 00:02:01.561 CC test/rpc_client/rpc_client_test.o 00:02:01.561 CXX app/trace/trace.o 00:02:01.561 TEST_HEADER include/spdk/accel.h 00:02:01.561 CC app/spdk_top/spdk_top.o 00:02:01.561 CC app/spdk_nvme_discover/discovery_aer.o 00:02:01.561 CC app/spdk_nvme_perf/perf.o 00:02:01.561 TEST_HEADER include/spdk/accel_module.h 00:02:01.821 TEST_HEADER include/spdk/assert.h 00:02:01.821 CC app/trace_record/trace_record.o 00:02:01.821 CC app/spdk_lspci/spdk_lspci.o 00:02:01.821 TEST_HEADER include/spdk/barrier.h 00:02:01.821 TEST_HEADER include/spdk/base64.h 00:02:01.821 CC app/spdk_nvme_identify/identify.o 00:02:01.821 TEST_HEADER include/spdk/bdev.h 00:02:01.821 TEST_HEADER include/spdk/bdev_module.h 00:02:01.821 TEST_HEADER include/spdk/bdev_zone.h 00:02:01.821 TEST_HEADER include/spdk/bit_array.h 00:02:01.821 TEST_HEADER include/spdk/bit_pool.h 00:02:01.821 TEST_HEADER include/spdk/blob_bdev.h 00:02:01.821 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:01.821 TEST_HEADER include/spdk/blobfs.h 00:02:01.821 TEST_HEADER include/spdk/blob.h 00:02:01.821 TEST_HEADER include/spdk/conf.h 00:02:01.821 TEST_HEADER include/spdk/config.h 00:02:01.821 TEST_HEADER include/spdk/cpuset.h 00:02:01.821 TEST_HEADER include/spdk/crc16.h 00:02:01.821 TEST_HEADER include/spdk/crc32.h 00:02:01.821 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:01.821 TEST_HEADER include/spdk/crc64.h 00:02:01.821 CC app/spdk_dd/spdk_dd.o 00:02:01.821 TEST_HEADER include/spdk/dif.h 00:02:01.821 TEST_HEADER include/spdk/dma.h 00:02:01.821 TEST_HEADER include/spdk/endian.h 00:02:01.821 TEST_HEADER include/spdk/env_dpdk.h 00:02:01.821 TEST_HEADER include/spdk/env.h 00:02:01.821 TEST_HEADER include/spdk/event.h 00:02:01.821 TEST_HEADER include/spdk/fd_group.h 00:02:01.821 CC app/iscsi_tgt/iscsi_tgt.o 00:02:01.821 TEST_HEADER include/spdk/fd.h 00:02:01.821 CC app/nvmf_tgt/nvmf_main.o 00:02:01.821 TEST_HEADER include/spdk/file.h 00:02:01.821 TEST_HEADER include/spdk/ftl.h 00:02:01.821 CC app/vhost/vhost.o 00:02:01.821 TEST_HEADER include/spdk/gpt_spec.h 00:02:01.821 TEST_HEADER include/spdk/hexlify.h 00:02:01.821 TEST_HEADER include/spdk/histogram_data.h 00:02:01.821 TEST_HEADER include/spdk/idxd.h 00:02:01.821 TEST_HEADER include/spdk/idxd_spec.h 00:02:01.821 TEST_HEADER include/spdk/init.h 00:02:01.822 TEST_HEADER include/spdk/ioat.h 00:02:01.822 TEST_HEADER include/spdk/ioat_spec.h 00:02:01.822 TEST_HEADER include/spdk/iscsi_spec.h 00:02:01.822 TEST_HEADER include/spdk/json.h 00:02:01.822 CC examples/sock/hello_world/hello_sock.o 00:02:01.822 TEST_HEADER include/spdk/jsonrpc.h 00:02:01.822 CC test/event/event_perf/event_perf.o 00:02:01.822 CC 
test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:01.822 TEST_HEADER include/spdk/keyring.h 00:02:01.822 CC examples/accel/perf/accel_perf.o 00:02:01.822 CC app/spdk_tgt/spdk_tgt.o 00:02:01.822 CC examples/vmd/lsvmd/lsvmd.o 00:02:01.822 TEST_HEADER include/spdk/keyring_module.h 00:02:01.822 CC test/env/memory/memory_ut.o 00:02:01.822 CC test/thread/poller_perf/poller_perf.o 00:02:01.822 TEST_HEADER include/spdk/likely.h 00:02:01.822 CC examples/nvme/hello_world/hello_world.o 00:02:01.822 TEST_HEADER include/spdk/log.h 00:02:01.822 CC test/env/vtophys/vtophys.o 00:02:01.822 CC test/nvme/reset/reset.o 00:02:01.822 TEST_HEADER include/spdk/lvol.h 00:02:01.822 CC examples/ioat/verify/verify.o 00:02:01.822 TEST_HEADER include/spdk/memory.h 00:02:01.822 TEST_HEADER include/spdk/mmio.h 00:02:01.822 CC test/event/reactor/reactor.o 00:02:01.822 CC examples/idxd/perf/perf.o 00:02:01.822 CC examples/ioat/perf/perf.o 00:02:01.822 TEST_HEADER include/spdk/nbd.h 00:02:01.822 CC test/app/histogram_perf/histogram_perf.o 00:02:01.822 CC test/nvme/aer/aer.o 00:02:01.822 TEST_HEADER include/spdk/notify.h 00:02:01.822 TEST_HEADER include/spdk/nvme.h 00:02:01.822 CC app/fio/nvme/fio_plugin.o 00:02:01.822 TEST_HEADER include/spdk/nvme_intel.h 00:02:01.822 CC examples/nvme/reconnect/reconnect.o 00:02:01.822 CC examples/util/zipf/zipf.o 00:02:01.822 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:01.822 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:01.822 TEST_HEADER include/spdk/nvme_spec.h 00:02:01.822 TEST_HEADER include/spdk/nvme_zns.h 00:02:01.822 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:01.822 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:01.822 TEST_HEADER include/spdk/nvmf.h 00:02:01.822 TEST_HEADER include/spdk/nvmf_spec.h 00:02:01.822 TEST_HEADER include/spdk/nvmf_transport.h 00:02:01.822 CC test/bdev/bdevio/bdevio.o 00:02:01.822 CC test/dma/test_dma/test_dma.o 00:02:01.822 TEST_HEADER include/spdk/opal.h 00:02:01.822 CC examples/bdev/hello_world/hello_bdev.o 00:02:01.822 TEST_HEADER include/spdk/opal_spec.h 00:02:01.822 CC examples/nvmf/nvmf/nvmf.o 00:02:01.822 CC examples/bdev/bdevperf/bdevperf.o 00:02:01.822 CC test/accel/dif/dif.o 00:02:01.822 TEST_HEADER include/spdk/pci_ids.h 00:02:01.822 TEST_HEADER include/spdk/pipe.h 00:02:01.822 TEST_HEADER include/spdk/queue.h 00:02:01.822 CC test/blobfs/mkfs/mkfs.o 00:02:01.822 CC examples/blob/hello_world/hello_blob.o 00:02:01.822 CC test/app/bdev_svc/bdev_svc.o 00:02:01.822 CC examples/blob/cli/blobcli.o 00:02:01.822 TEST_HEADER include/spdk/reduce.h 00:02:01.822 TEST_HEADER include/spdk/rpc.h 00:02:01.822 TEST_HEADER include/spdk/scheduler.h 00:02:01.822 TEST_HEADER include/spdk/scsi.h 00:02:01.822 TEST_HEADER include/spdk/scsi_spec.h 00:02:01.822 CC examples/thread/thread/thread_ex.o 00:02:01.822 TEST_HEADER include/spdk/sock.h 00:02:01.822 TEST_HEADER include/spdk/stdinc.h 00:02:02.086 TEST_HEADER include/spdk/string.h 00:02:02.086 TEST_HEADER include/spdk/thread.h 00:02:02.086 TEST_HEADER include/spdk/trace.h 00:02:02.086 TEST_HEADER include/spdk/trace_parser.h 00:02:02.086 TEST_HEADER include/spdk/tree.h 00:02:02.086 TEST_HEADER include/spdk/ublk.h 00:02:02.086 TEST_HEADER include/spdk/util.h 00:02:02.086 TEST_HEADER include/spdk/uuid.h 00:02:02.086 TEST_HEADER include/spdk/version.h 00:02:02.086 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:02.086 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:02.086 LINK spdk_lspci 00:02:02.086 TEST_HEADER include/spdk/vhost.h 00:02:02.086 TEST_HEADER include/spdk/vmd.h 00:02:02.086 TEST_HEADER 
include/spdk/xor.h 00:02:02.086 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:02.086 TEST_HEADER include/spdk/zipf.h 00:02:02.086 CC test/env/mem_callbacks/mem_callbacks.o 00:02:02.086 CXX test/cpp_headers/accel.o 00:02:02.086 CC test/lvol/esnap/esnap.o 00:02:02.086 LINK rpc_client_test 00:02:02.086 LINK spdk_nvme_discover 00:02:02.086 LINK lsvmd 00:02:02.086 LINK reactor 00:02:02.086 LINK event_perf 00:02:02.086 LINK poller_perf 00:02:02.086 LINK vtophys 00:02:02.086 LINK interrupt_tgt 00:02:02.086 LINK env_dpdk_post_init 00:02:02.086 LINK nvmf_tgt 00:02:02.086 LINK vhost 00:02:02.086 LINK histogram_perf 00:02:02.086 LINK zipf 00:02:02.356 LINK spdk_trace_record 00:02:02.356 LINK iscsi_tgt 00:02:02.356 LINK spdk_tgt 00:02:02.356 LINK verify 00:02:02.356 LINK bdev_svc 00:02:02.356 LINK ioat_perf 00:02:02.356 LINK hello_world 00:02:02.356 LINK hello_sock 00:02:02.356 CXX test/cpp_headers/accel_module.o 00:02:02.356 LINK mkfs 00:02:02.356 LINK hello_bdev 00:02:02.356 LINK reset 00:02:02.356 LINK hello_blob 00:02:02.356 LINK aer 00:02:02.356 LINK thread 00:02:02.621 LINK nvmf 00:02:02.621 LINK spdk_dd 00:02:02.621 LINK idxd_perf 00:02:02.621 LINK reconnect 00:02:02.621 CC test/env/pci/pci_ut.o 00:02:02.621 CXX test/cpp_headers/assert.o 00:02:02.621 LINK test_dma 00:02:02.621 LINK bdevio 00:02:02.621 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:02.621 CC test/nvme/sgl/sgl.o 00:02:02.621 LINK spdk_trace 00:02:02.621 CC test/app/jsoncat/jsoncat.o 00:02:02.621 CC test/event/reactor_perf/reactor_perf.o 00:02:02.621 CC examples/vmd/led/led.o 00:02:02.621 CXX test/cpp_headers/barrier.o 00:02:02.621 CXX test/cpp_headers/base64.o 00:02:02.886 CC test/event/app_repeat/app_repeat.o 00:02:02.886 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:02.886 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:02.886 LINK dif 00:02:02.886 CXX test/cpp_headers/bdev.o 00:02:02.886 CC examples/nvme/arbitration/arbitration.o 00:02:02.886 CC app/fio/bdev/fio_plugin.o 00:02:02.886 LINK nvme_fuzz 00:02:02.886 CC test/event/scheduler/scheduler.o 00:02:02.886 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:02.886 CC test/app/stub/stub.o 00:02:02.886 CC test/nvme/e2edp/nvme_dp.o 00:02:02.886 CC test/nvme/overhead/overhead.o 00:02:02.886 LINK accel_perf 00:02:02.886 CC test/nvme/err_injection/err_injection.o 00:02:02.886 CC examples/nvme/hotplug/hotplug.o 00:02:02.886 CXX test/cpp_headers/bdev_module.o 00:02:02.886 LINK blobcli 00:02:02.886 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:02.886 CXX test/cpp_headers/bdev_zone.o 00:02:02.886 CXX test/cpp_headers/bit_array.o 00:02:03.161 LINK jsoncat 00:02:03.161 CC test/nvme/startup/startup.o 00:02:03.161 CC test/nvme/reserve/reserve.o 00:02:03.161 CC examples/nvme/abort/abort.o 00:02:03.161 LINK led 00:02:03.161 LINK reactor_perf 00:02:03.161 CXX test/cpp_headers/bit_pool.o 00:02:03.161 CXX test/cpp_headers/blob_bdev.o 00:02:03.161 CC test/nvme/simple_copy/simple_copy.o 00:02:03.161 LINK app_repeat 00:02:03.161 CXX test/cpp_headers/blobfs_bdev.o 00:02:03.161 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:03.161 CC test/nvme/connect_stress/connect_stress.o 00:02:03.161 CC test/nvme/boot_partition/boot_partition.o 00:02:03.161 LINK sgl 00:02:03.161 LINK mem_callbacks 00:02:03.161 CC test/nvme/compliance/nvme_compliance.o 00:02:03.161 LINK spdk_nvme 00:02:03.161 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:03.161 CXX test/cpp_headers/blobfs.o 00:02:03.161 LINK spdk_nvme_perf 00:02:03.161 CC test/nvme/fused_ordering/fused_ordering.o 00:02:03.161 CC test/nvme/fdp/fdp.o 
00:02:03.161 CXX test/cpp_headers/blob.o 00:02:03.430 LINK stub 00:02:03.430 CXX test/cpp_headers/conf.o 00:02:03.430 LINK err_injection 00:02:03.430 CXX test/cpp_headers/config.o 00:02:03.430 CXX test/cpp_headers/cpuset.o 00:02:03.430 LINK scheduler 00:02:03.430 CXX test/cpp_headers/crc16.o 00:02:03.430 LINK pci_ut 00:02:03.430 LINK cmb_copy 00:02:03.430 CXX test/cpp_headers/crc32.o 00:02:03.430 CC test/nvme/cuse/cuse.o 00:02:03.430 CXX test/cpp_headers/crc64.o 00:02:03.430 CXX test/cpp_headers/dif.o 00:02:03.430 LINK hotplug 00:02:03.430 LINK startup 00:02:03.430 CXX test/cpp_headers/dma.o 00:02:03.430 CXX test/cpp_headers/endian.o 00:02:03.430 CXX test/cpp_headers/env_dpdk.o 00:02:03.430 CXX test/cpp_headers/env.o 00:02:03.430 LINK overhead 00:02:03.430 LINK spdk_nvme_identify 00:02:03.430 LINK arbitration 00:02:03.430 LINK nvme_dp 00:02:03.430 LINK bdevperf 00:02:03.430 CXX test/cpp_headers/event.o 00:02:03.430 CXX test/cpp_headers/fd_group.o 00:02:03.430 LINK reserve 00:02:03.430 LINK pmr_persistence 00:02:03.430 CXX test/cpp_headers/fd.o 00:02:03.430 LINK spdk_top 00:02:03.700 CXX test/cpp_headers/file.o 00:02:03.700 LINK connect_stress 00:02:03.700 LINK boot_partition 00:02:03.700 CXX test/cpp_headers/ftl.o 00:02:03.700 LINK simple_copy 00:02:03.700 LINK nvme_manage 00:02:03.700 CXX test/cpp_headers/gpt_spec.o 00:02:03.701 CXX test/cpp_headers/hexlify.o 00:02:03.701 CXX test/cpp_headers/histogram_data.o 00:02:03.701 CXX test/cpp_headers/idxd.o 00:02:03.701 CXX test/cpp_headers/idxd_spec.o 00:02:03.701 LINK vhost_fuzz 00:02:03.701 LINK doorbell_aers 00:02:03.701 CXX test/cpp_headers/init.o 00:02:03.701 CXX test/cpp_headers/ioat.o 00:02:03.701 LINK fused_ordering 00:02:03.701 CXX test/cpp_headers/ioat_spec.o 00:02:03.701 CXX test/cpp_headers/iscsi_spec.o 00:02:03.701 CXX test/cpp_headers/json.o 00:02:03.970 CXX test/cpp_headers/jsonrpc.o 00:02:03.970 CXX test/cpp_headers/keyring.o 00:02:03.970 CXX test/cpp_headers/keyring_module.o 00:02:03.970 CXX test/cpp_headers/likely.o 00:02:03.970 CXX test/cpp_headers/log.o 00:02:03.970 LINK spdk_bdev 00:02:03.970 CXX test/cpp_headers/lvol.o 00:02:03.970 LINK abort 00:02:03.970 CXX test/cpp_headers/memory.o 00:02:03.970 CXX test/cpp_headers/mmio.o 00:02:03.971 CXX test/cpp_headers/nbd.o 00:02:03.971 CXX test/cpp_headers/notify.o 00:02:03.971 CXX test/cpp_headers/nvme.o 00:02:03.971 CXX test/cpp_headers/nvme_intel.o 00:02:03.971 CXX test/cpp_headers/nvme_ocssd.o 00:02:03.971 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:03.971 LINK nvme_compliance 00:02:03.971 CXX test/cpp_headers/nvme_spec.o 00:02:03.971 CXX test/cpp_headers/nvme_zns.o 00:02:03.971 CXX test/cpp_headers/nvmf_cmd.o 00:02:03.971 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:03.971 CXX test/cpp_headers/nvmf.o 00:02:03.971 CXX test/cpp_headers/nvmf_spec.o 00:02:03.971 CXX test/cpp_headers/nvmf_transport.o 00:02:03.971 CXX test/cpp_headers/opal.o 00:02:03.971 CXX test/cpp_headers/opal_spec.o 00:02:03.971 CXX test/cpp_headers/pci_ids.o 00:02:03.971 LINK fdp 00:02:03.971 CXX test/cpp_headers/pipe.o 00:02:03.971 CXX test/cpp_headers/queue.o 00:02:03.971 CXX test/cpp_headers/reduce.o 00:02:03.971 LINK memory_ut 00:02:03.971 CXX test/cpp_headers/rpc.o 00:02:03.971 CXX test/cpp_headers/scheduler.o 00:02:03.971 CXX test/cpp_headers/scsi.o 00:02:03.971 CXX test/cpp_headers/scsi_spec.o 00:02:03.971 CXX test/cpp_headers/sock.o 00:02:03.971 CXX test/cpp_headers/stdinc.o 00:02:03.971 CXX test/cpp_headers/string.o 00:02:03.971 CXX test/cpp_headers/thread.o 00:02:04.235 CXX test/cpp_headers/trace.o 
00:02:04.235 CXX test/cpp_headers/trace_parser.o 00:02:04.235 CXX test/cpp_headers/tree.o 00:02:04.235 CXX test/cpp_headers/ublk.o 00:02:04.235 CXX test/cpp_headers/util.o 00:02:04.235 CXX test/cpp_headers/uuid.o 00:02:04.235 CXX test/cpp_headers/version.o 00:02:04.235 CXX test/cpp_headers/vfio_user_pci.o 00:02:04.235 CXX test/cpp_headers/vfio_user_spec.o 00:02:04.235 CXX test/cpp_headers/vhost.o 00:02:04.235 CXX test/cpp_headers/vmd.o 00:02:04.235 CXX test/cpp_headers/xor.o 00:02:04.235 CXX test/cpp_headers/zipf.o 00:02:05.173 LINK cuse 00:02:05.430 LINK iscsi_fuzz 00:02:07.964 LINK esnap 00:02:08.533 00:02:08.533 real 0m48.893s 00:02:08.533 user 10m29.336s 00:02:08.533 sys 2m32.245s 00:02:08.533 20:00:55 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:08.533 20:00:55 make -- common/autotest_common.sh@10 -- $ set +x 00:02:08.533 ************************************ 00:02:08.533 END TEST make 00:02:08.533 ************************************ 00:02:08.533 20:00:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:08.533 20:00:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:08.533 20:00:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:08.533 20:00:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.533 20:00:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:08.533 20:00:55 -- pm/common@44 -- $ pid=4676 00:02:08.533 20:00:55 -- pm/common@50 -- $ kill -TERM 4676 00:02:08.533 20:00:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.533 20:00:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:08.533 20:00:55 -- pm/common@44 -- $ pid=4678 00:02:08.533 20:00:55 -- pm/common@50 -- $ kill -TERM 4678 00:02:08.533 20:00:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.533 20:00:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:08.533 20:00:55 -- pm/common@44 -- $ pid=4680 00:02:08.533 20:00:55 -- pm/common@50 -- $ kill -TERM 4680 00:02:08.533 20:00:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.533 20:00:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:08.533 20:00:55 -- pm/common@44 -- $ pid=4708 00:02:08.533 20:00:55 -- pm/common@50 -- $ sudo -E kill -TERM 4708 00:02:08.533 20:00:55 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:08.533 20:00:55 -- nvmf/common.sh@7 -- # uname -s 00:02:08.533 20:00:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:08.533 20:00:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:08.533 20:00:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:08.533 20:00:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:08.533 20:00:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:08.533 20:00:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:08.533 20:00:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:08.533 20:00:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:08.533 20:00:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:08.533 20:00:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:08.533 20:00:55 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:02:08.533 20:00:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:02:08.533 20:00:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:08.533 20:00:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:08.533 20:00:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:08.533 20:00:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:08.533 20:00:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:08.533 20:00:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:08.533 20:00:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:08.533 20:00:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:08.533 20:00:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.533 20:00:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.533 20:00:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.533 20:00:55 -- paths/export.sh@5 -- # export PATH 00:02:08.533 20:00:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.533 20:00:55 -- nvmf/common.sh@47 -- # : 0 00:02:08.533 20:00:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:08.533 20:00:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:08.533 20:00:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:08.533 20:00:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:08.533 20:00:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:08.533 20:00:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:08.533 20:00:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:08.533 20:00:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:08.533 20:00:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:08.534 20:00:55 -- spdk/autotest.sh@32 -- # uname -s 00:02:08.534 20:00:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:08.534 20:00:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:08.534 20:00:55 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:08.534 20:00:55 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:08.534 20:00:55 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:08.534 20:00:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:08.534 20:00:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:08.534 20:00:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:08.534 20:00:55 -- spdk/autotest.sh@48 -- # udevadm_pid=61198 00:02:08.534 20:00:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:08.534 20:00:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:08.534 20:00:55 -- pm/common@17 -- # local monitor 00:02:08.534 20:00:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.534 20:00:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.534 20:00:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.534 20:00:55 -- pm/common@21 -- # date +%s 00:02:08.534 20:00:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.534 20:00:55 -- pm/common@21 -- # date +%s 00:02:08.534 20:00:55 -- pm/common@25 -- # sleep 1 00:02:08.534 20:00:55 -- pm/common@21 -- # date +%s 00:02:08.534 20:00:55 -- pm/common@21 -- # date +%s 00:02:08.534 20:00:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715882455 00:02:08.534 20:00:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715882455 00:02:08.534 20:00:55 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715882455 00:02:08.534 20:00:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715882455 00:02:08.534 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715882455_collect-vmstat.pm.log 00:02:08.534 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715882455_collect-cpu-load.pm.log 00:02:08.534 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715882455_collect-cpu-temp.pm.log 00:02:08.534 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715882455_collect-bmc-pm.bmc.pm.log 00:02:09.476 20:00:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:09.476 20:00:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:09.476 20:00:56 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:09.476 20:00:56 -- common/autotest_common.sh@10 -- # set +x 00:02:09.476 20:00:56 -- spdk/autotest.sh@59 -- # create_test_list 00:02:09.476 20:00:56 -- common/autotest_common.sh@744 -- # xtrace_disable 00:02:09.476 20:00:56 -- common/autotest_common.sh@10 -- # set +x 00:02:09.476 20:00:56 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:09.476 20:00:56 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:09.476 20:00:56 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
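Note on the pm/common trace above: each resource collector is launched once with a shared log prefix (the "Redirecting to ...pm.log" lines), and autotest later signals it with TERM through a pid file (the kill -TERM calls at the top of this section). A minimal sketch of that pattern, assuming the scripts/perf/pm layout shown in the log; the helper names and the way the pid file is written here are illustrative simplifications, not the project's actual implementation:

#!/usr/bin/env bash
# Sketch only: mirrors the start/stop flow visible in the trace, not SPDK's code.
power_dir=$PWD/output/power                 # stands in for spdk/../output/power
prefix=monitor.autotest.sh.$(date +%s)      # same naming scheme as in the log
mkdir -p "$power_dir"

start_monitor() {                           # hypothetical helper
    local name=$1
    # each collector appends to <prefix>_<name>.pm.log, cf. the "Redirecting to" lines
    "./scripts/perf/pm/$name" -d "$power_dir" -l -p "$prefix" &
    echo $! > "$power_dir/$name.pid"        # simplification: the real collectors manage their own pid files
}

stop_monitor() {                            # cf. signal_monitor_resources TERM
    local name=$1 pid
    [[ -e "$power_dir/$name.pid" ]] || return 0
    pid=$(<"$power_dir/$name.pid")
    kill -TERM "$pid" 2>/dev/null || true
}

for m in collect-cpu-load collect-vmstat collect-cpu-temp; do start_monitor "$m"; done
# ... run the tests ...
for m in collect-cpu-load collect-vmstat collect-cpu-temp; do stop_monitor "$m"; done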
00:02:09.476 20:00:56 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:09.476 20:00:56 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:09.476 20:00:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:09.476 20:00:56 -- common/autotest_common.sh@1451 -- # uname 00:02:09.476 20:00:56 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:02:09.476 20:00:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:09.476 20:00:56 -- common/autotest_common.sh@1471 -- # uname 00:02:09.476 20:00:56 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:02:09.476 20:00:56 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:09.476 20:00:56 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:09.476 20:00:56 -- spdk/autotest.sh@72 -- # hash lcov 00:02:09.476 20:00:56 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:09.476 20:00:56 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:09.476 --rc lcov_branch_coverage=1 00:02:09.476 --rc lcov_function_coverage=1 00:02:09.476 --rc genhtml_branch_coverage=1 00:02:09.476 --rc genhtml_function_coverage=1 00:02:09.476 --rc genhtml_legend=1 00:02:09.476 --rc geninfo_all_blocks=1 00:02:09.476 ' 00:02:09.476 20:00:56 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:09.476 --rc lcov_branch_coverage=1 00:02:09.476 --rc lcov_function_coverage=1 00:02:09.476 --rc genhtml_branch_coverage=1 00:02:09.476 --rc genhtml_function_coverage=1 00:02:09.476 --rc genhtml_legend=1 00:02:09.476 --rc geninfo_all_blocks=1 00:02:09.476 ' 00:02:09.476 20:00:56 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:09.476 --rc lcov_branch_coverage=1 00:02:09.476 --rc lcov_function_coverage=1 00:02:09.476 --rc genhtml_branch_coverage=1 00:02:09.476 --rc genhtml_function_coverage=1 00:02:09.476 --rc genhtml_legend=1 00:02:09.476 --rc geninfo_all_blocks=1 00:02:09.476 --no-external' 00:02:09.476 20:00:56 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:09.476 --rc lcov_branch_coverage=1 00:02:09.476 --rc lcov_function_coverage=1 00:02:09.476 --rc genhtml_branch_coverage=1 00:02:09.476 --rc genhtml_function_coverage=1 00:02:09.476 --rc genhtml_legend=1 00:02:09.476 --rc geninfo_all_blocks=1 00:02:09.476 --no-external' 00:02:09.476 20:00:56 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:09.734 lcov: LCOV version 1.14 00:02:09.734 20:00:56 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:24.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:24.620 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:42.704 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:42.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:42.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:42.705 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:42.705 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:42.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:42.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:42.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:42.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:42.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:42.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:42.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:42.706 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:42.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:42.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:42.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:42.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:42.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:42.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:42.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:42.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:43.641 20:01:30 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:43.641 20:01:30 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:43.641 20:01:30 -- common/autotest_common.sh@10 -- # set +x 00:02:43.641 20:01:30 -- spdk/autotest.sh@91 -- # rm -f 00:02:43.641 20:01:30 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:44.580 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:44.580 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:44.580 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:44.580 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:44.580 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:44.580 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:44.580 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:44.580 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:44.580 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:02:44.580 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:44.580 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:44.580 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:44.580 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:44.580 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:44.580 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:44.580 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:44.580 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:44.839 20:01:31 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:44.839 20:01:31 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:44.839 20:01:31 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:44.839 20:01:31 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:44.839 20:01:31 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:44.839 20:01:31 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:44.839 20:01:31 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:44.839 20:01:31 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:44.839 20:01:31 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:44.839 20:01:31 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:44.839 
20:01:31 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:44.839 20:01:31 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:44.839 20:01:31 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:44.839 20:01:31 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:44.839 20:01:31 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:44.839 No valid GPT data, bailing 00:02:44.839 20:01:31 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:44.839 20:01:31 -- scripts/common.sh@391 -- # pt= 00:02:44.839 20:01:31 -- scripts/common.sh@392 -- # return 1 00:02:44.839 20:01:31 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:44.839 1+0 records in 00:02:44.839 1+0 records out 00:02:44.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00201341 s, 521 MB/s 00:02:44.839 20:01:31 -- spdk/autotest.sh@118 -- # sync 00:02:44.839 20:01:31 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:44.839 20:01:31 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:44.839 20:01:31 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:46.742 20:01:33 -- spdk/autotest.sh@124 -- # uname -s 00:02:46.742 20:01:33 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:46.742 20:01:33 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:46.742 20:01:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:46.742 20:01:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:46.742 20:01:33 -- common/autotest_common.sh@10 -- # set +x 00:02:46.742 ************************************ 00:02:46.742 START TEST setup.sh 00:02:46.742 ************************************ 00:02:46.742 20:01:33 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:46.742 * Looking for test storage... 00:02:46.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:46.742 20:01:33 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:46.742 20:01:33 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:46.742 20:01:33 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:46.742 20:01:33 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:46.742 20:01:33 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:46.742 20:01:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:46.742 ************************************ 00:02:46.742 START TEST acl 00:02:46.742 ************************************ 00:02:46.742 20:01:33 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:46.742 * Looking for test storage... 
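The trace above decides whether /dev/nvme0n1 is in use before wiping it: spdk-gpt.py finds no valid GPT data, blkid reports no partition-table type, so the first MiB is zeroed and synced. A minimal standalone sketch of that check, assuming a scratch device; the device path is the one from the log and the spdk-gpt.py step is omitted here:

#!/usr/bin/env bash
# Sketch of the "block_in_use -> wipe" decision seen in the trace. Destructive:
# only point it at a scratch device.
dev=/dev/nvme0n1                              # device name taken from the log

pt=$(blkid -s PTTYPE -o value "$dev" || true) # empty when no partition table exists
if [[ -z $pt ]]; then
    # treated as not in use: clear stale metadata in the first MiB, then flush
    dd if=/dev/zero of="$dev" bs=1M count=1
    sync
else
    echo "$dev carries a $pt partition table; leaving it untouched"
fi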
00:02:46.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:46.742 20:01:33 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:46.742 20:01:33 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:46.742 20:01:33 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:46.742 20:01:33 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:46.742 20:01:33 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:46.742 20:01:33 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:46.742 20:01:33 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:46.742 20:01:33 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:46.742 20:01:33 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:46.742 20:01:33 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:46.742 20:01:33 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:46.742 20:01:33 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:46.742 20:01:33 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:46.742 20:01:33 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:46.742 20:01:33 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:46.742 20:01:33 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:48.122 20:01:35 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:48.122 20:01:35 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:48.122 20:01:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.122 20:01:35 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:48.122 20:01:35 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:48.122 20:01:35 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:49.060 Hugepages 00:02:49.060 node hugesize free / total 00:02:49.060 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:49.060 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:49.060 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.060 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:49.060 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:49.060 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.060 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:49.060 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:49.060 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.060 00:02:49.060 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:49.060 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:49.060 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:49.060 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.320 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:49.320 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.320 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.320 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.320 20:01:36 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:49.320 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.320 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.320 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.320 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:49.320 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.320 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.320 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.320 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:0b:00.0 == *:*:*.* ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:49.321 20:01:36 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:49.321 20:01:36 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:49.321 20:01:36 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:49.321 20:01:36 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:49.321 ************************************ 00:02:49.321 START TEST denied 00:02:49.321 ************************************ 00:02:49.321 20:01:36 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:02:49.321 20:01:36 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:0b:00.0' 00:02:49.321 20:01:36 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:49.321 20:01:36 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:0b:00.0' 00:02:49.321 20:01:36 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.321 20:01:36 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:50.698 0000:0b:00.0 (8086 0a54): Skipping denied controller at 0000:0b:00.0 00:02:50.698 20:01:37 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:0b:00.0 00:02:50.698 20:01:37 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:50.698 20:01:37 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:50.698 20:01:37 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:0b:00.0 ]] 00:02:50.698 20:01:37 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver 00:02:50.698 20:01:37 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:50.698 20:01:37 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:50.698 20:01:37 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:50.698 20:01:37 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:50.698 20:01:37 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.247 00:02:53.247 real 0m3.538s 00:02:53.247 user 0m1.018s 00:02:53.247 sys 0m1.666s 00:02:53.247 20:01:39 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:53.247 20:01:39 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:53.247 ************************************ 00:02:53.247 END TEST denied 00:02:53.247 ************************************ 00:02:53.247 20:01:39 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:53.247 20:01:39 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:53.247 20:01:39 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:53.247 20:01:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:53.247 ************************************ 00:02:53.247 START TEST allowed 00:02:53.247 ************************************ 00:02:53.247 20:01:39 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:02:53.247 20:01:39 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:0b:00.0 00:02:53.247 20:01:39 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:53.247 20:01:39 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:0b:00.0 .*: nvme -> .*' 00:02:53.247 20:01:39 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.247 20:01:39 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:55.154 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:02:55.154 20:01:42 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:55.154 20:01:42 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:55.154 20:01:42 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:55.154 20:01:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:55.154 20:01:42 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:56.535 00:02:56.535 real 0m3.621s 00:02:56.535 user 0m0.929s 00:02:56.535 sys 0m1.642s 00:02:56.535 20:01:43 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:56.535 20:01:43 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:56.535 ************************************ 00:02:56.535 END TEST allowed 00:02:56.535 ************************************ 00:02:56.535 00:02:56.535 real 0m9.872s 00:02:56.535 user 0m3.012s 00:02:56.535 sys 0m5.028s 00:02:56.535 20:01:43 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:56.535 20:01:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:56.535 ************************************ 00:02:56.535 END TEST acl 00:02:56.535 ************************************ 00:02:56.535 20:01:43 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:56.535 20:01:43 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:56.535 20:01:43 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:02:56.535 20:01:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:56.535 ************************************ 00:02:56.535 START TEST hugepages 00:02:56.535 ************************************ 00:02:56.535 20:01:43 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:56.535 * Looking for test storage... 00:02:56.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:56.535 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:56.535 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:56.535 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:56.535 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:56.535 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:56.536 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:56.536 20:01:43 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:56.536 20:01:43 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:56.536 20:01:43 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:56.536 20:01:43 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:56.536 20:01:43 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.536 20:01:43 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.536 20:01:43 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.536 20:01:43 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.536 20:01:43 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.536 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:56.536 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:56.536 20:01:43 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 45037152 kB' 'MemAvailable: 48569748 kB' 'Buffers: 10496 kB' 'Cached: 8885124 kB' 'SwapCached: 0 kB' 'Active: 6215148 kB' 'Inactive: 3405072 kB' 'Active(anon): 5670124 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 727612 kB' 'Mapped: 144336 kB' 'Shmem: 4945524 kB' 'KReclaimable: 152716 kB' 'Slab: 428380 kB' 'SReclaimable: 152716 kB' 'SUnreclaim: 275664 kB' 'KernelStack: 12896 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562300 kB' 'Committed_AS: 7298620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 192984 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:02:56.536 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
... (setup/common.sh@31-32 xtrace: IFS=': ' / read -r var val _ / continue, repeated for each /proc/meminfo field that does not match Hugepagesize) ...
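The per-field scan condensed above is setup/common.sh's get_meminfo helper at work: it splits every /proc/meminfo line on ': ', keeps reading until the requested field matches, and echoes the value (Hugepagesize resolves to 2048 a few lines below). A minimal standalone sketch of that pattern, reading /proc/meminfo directly instead of going through the mapfile'd array in the trace; the function name is illustrative:

  #!/usr/bin/env bash
  # Sketch of the get_meminfo-style lookup seen in the xtrace: split each
  # /proc/meminfo line on ': ', stop at the requested field, print its value.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }

  get_meminfo_sketch Hugepagesize   # prints 2048 on this runner, matching the trace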
00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
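Once Hugepagesize is known, the sizing that default_setup performs further down is plain integer arithmetic: the 2097152 kB request divided by the 2048 kB page size gives nr_hugepages=1024, staged entirely on node 0 after clear_hp (CLEAR_HUGE=yes) has zeroed the per-node pools. A hedged sketch of that calculation and of per-node sysfs writes in the same spirit; the paths mirror the ones echoed in the trace below, but the script itself is illustrative rather than the SPDK helpers:

  #!/usr/bin/env bash
  # Sketch: size a 2 MB huge page pool from a kB request and stage it on node 0,
  # mirroring the numbers in the trace (2097152 kB / 2048 kB = 1024 pages).
  size_kb=2097152
  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this runner
  nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # -> 1024

  # Zero any existing per-node reservations first (the effect of clear_hp / CLEAR_HUGE=yes).
  for hp in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
      echo 0 | sudo tee "$hp" > /dev/null
  done

  # Request the new pages on node 0 only, as node_ids=('0') in the trace.
  echo "$nr_hugepages" | sudo tee \
      /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages > /dev/null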
00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:56.799 20:01:43 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:56.799 20:01:43 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:56.799 20:01:43 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:56.799 20:01:43 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:56.799 20:01:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:56.799 ************************************ 00:02:56.799 START TEST default_setup 00:02:56.799 ************************************ 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.799 20:01:43 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:57.744 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:57.744 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:57.744 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:57.744 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:57.744 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:57.744 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:57.744 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
00:02:57.744 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:57.744 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:57.744 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:57.744 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:57.744 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:58.010 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:58.010 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:58.010 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:58.010 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:58.957 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.957 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47134196 kB' 'MemAvailable: 50666512 kB' 'Buffers: 10496 kB' 'Cached: 8885216 kB' 'SwapCached: 0 kB' 'Active: 6237664 kB' 'Inactive: 3405072 kB' 'Active(anon): 5692640 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 750184 kB' 'Mapped: 144428 kB' 'Shmem: 4945616 kB' 'KReclaimable: 152156 kB' 'Slab: 427776 kB' 'SReclaimable: 152156 kB' 'SUnreclaim: 275620 kB' 'KernelStack: 12848 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610876 kB' 'Committed_AS: 7322840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
193208 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB'
... (setup/common.sh@31-32 xtrace: IFS=': ' / read -r var val _ / continue, repeated for each /proc/meminfo field until AnonHugePages) ...
00:02:58.958 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:58.958 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:58.958 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:58.958 20:01:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:02:58.958 20:01:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:58.958 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:58.958 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:58.958 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:58.959 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:58.959 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:58.959 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:58.959 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:58.959 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:58.959 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:58.959 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:58.959 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:58.959 20:01:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47133828 kB' 'MemAvailable: 50666136 kB' 'Buffers: 10496 kB' 'Cached: 8885224 kB' 'SwapCached: 0 kB' 'Active: 6235900 kB' 'Inactive: 3405072 kB' 'Active(anon): 5690876 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 748784 kB' 'Mapped: 144520 kB' 'Shmem: 4945624 kB' 'KReclaimable: 152140 kB' 'Slab: 427568 kB' 'SReclaimable: 152140 kB' 'SUnreclaim: 275428 kB' 'KernelStack: 12640 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610876 kB' 'Committed_AS: 7322860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193032 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB'
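verify_nr_hugepages is now reading those counters back: the snapshot above shows HugePages_Total and HugePages_Free at 1024 with Hugepagesize 2048 kB, AnonHugePages already came back 0, and the HugePages_Surp read that follows returns 0 as well (the transparent_hugepage setting was also checked earlier for not being "[never]"). A small standalone sketch of an equivalent check, assuming the same /proc/meminfo and sysfs sources; the pass/fail condition is illustrative, not the exact assertions in setup/hugepages.sh:

  #!/usr/bin/env bash
  # Sketch: re-read the hugepage counters that verify_nr_hugepages inspects and
  # compare them against the 1024 x 2048 kB pool that default_setup requested.
  expected=1024

  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
  rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)

  # The trace also looks at /sys/kernel/mm/transparent_hugepage/enabled
  # ("always [madvise] never" on this runner, i.e. not "[never]").
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
  [[ $thp == *'[never]'* ]] && echo "note: transparent huge pages are disabled: $thp"

  if (( total == expected && surp == 0 )); then
      echo "hugepage pool OK: total=$total free=$free rsvd=$rsvd surp=$surp"
  else
      echo "hugepage pool mismatch (expected $expected): total=$total free=$free rsvd=$rsvd surp=$surp" >&2
      exit 1
  fi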
... (setup/common.sh@31-32 xtrace: IFS=': ' / read -r var val _ / continue, repeated for each /proc/meminfo field until HugePages_Surp) ...
00:02:58.960 20:01:46
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47134112 kB' 'MemAvailable: 50666420 kB' 'Buffers: 10496 kB' 'Cached: 8885224 kB' 'SwapCached: 0 kB' 'Active: 6235412 kB' 'Inactive: 3405072 kB' 'Active(anon): 5690388 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 748284 kB' 'Mapped: 144512 kB' 'Shmem: 4945624 kB' 'KReclaimable: 152140 kB' 'Slab: 427560 kB' 'SReclaimable: 152140 kB' 'SUnreclaim: 275420 kB' 'KernelStack: 12704 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610876 kB' 'Committed_AS: 7322880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193032 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.960 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:58.961 nr_hugepages=1024 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:58.961 resv_hugepages=0 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:58.961 surplus_hugepages=0 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:58.961 anon_hugepages=0 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.961 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47134112 kB' 'MemAvailable: 50666420 kB' 'Buffers: 10496 kB' 'Cached: 8885224 kB' 'SwapCached: 0 kB' 'Active: 6235616 kB' 'Inactive: 3405072 kB' 'Active(anon): 5690592 kB' 'Inactive(anon): 0 kB' 
'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 748444 kB' 'Mapped: 144436 kB' 'Shmem: 4945624 kB' 'KReclaimable: 152140 kB' 'Slab: 427540 kB' 'SReclaimable: 152140 kB' 'SUnreclaim: 275400 kB' 'KernelStack: 12688 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610876 kB' 'Committed_AS: 7322900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193048 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.962 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
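The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]] / continue" above are the setup/common.sh get_meminfo helper scanning a meminfo file one "key: value" pair at a time until it reaches the requested counter (HugePages_Surp, then HugePages_Rsvd, then HugePages_Total here), echoing the value and returning. A minimal bash sketch of that loop, reconstructed from the trace rather than copied from the repository (the function name and exact structure below are assumptions), looks roughly like this:

#!/usr/bin/env bash
# Hedged sketch, not the repository code: approximates the get_meminfo
# behaviour visible in the xtrace above. Name and details are assumptions.
get_meminfo_sketch() {
    local get=$1 node=${2:-}                 # e.g. "HugePages_Total", or "HugePages_Surp 0"
    local mem_f=/proc/meminfo line var val _
    # A per-node query reads that node's own meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        # Per-node files prefix each entry with "Node <N> "; drop that prefix.
        [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<< "$line"
        # Every non-matching key is skipped -- the "continue" runs filling the log above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

On the machine in this log, get_meminfo_sketch HugePages_Total would print 1024 and get_meminfo_sketch HugePages_Surp 0 would print 0, matching the surp=0, resv=0 and "echo 1024" results traced above before the per-node accounting starts.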
00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28042644 kB' 'MemUsed: 4787240 kB' 'SwapCached: 0 kB' 'Active: 1780848 kB' 'Inactive: 161052 kB' 'Active(anon): 1619212 kB' 'Inactive(anon): 0 kB' 'Active(file): 161636 kB' 'Inactive(file): 161052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1445628 kB' 'Mapped: 94592 kB' 'AnonPages: 499636 kB' 'Shmem: 1122940 kB' 'KernelStack: 7960 kB' 'PageTables: 4596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 73040 kB' 'Slab: 200056 kB' 'SReclaimable: 73040 kB' 'SUnreclaim: 127016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.963 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:58.964 node0=1024 expecting 1024 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:58.964 00:02:58.964 real 0m2.351s 00:02:58.964 user 0m0.552s 00:02:58.964 sys 0m0.837s 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:58.964 20:01:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:58.964 ************************************ 00:02:58.964 END TEST default_setup 00:02:58.964 ************************************ 00:02:59.224 20:01:46 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:59.224 20:01:46 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:59.224 20:01:46 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:59.224 20:01:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:59.224 ************************************ 00:02:59.224 START TEST per_node_1G_alloc 00:02:59.224 ************************************ 00:02:59.224 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:02:59.224 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:59.224 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:59.224 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:59.224 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:59.224 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:59.224 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:59.224 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:59.224 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 
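The get_test_nr_hugepages call traced just above was handed a 1048576 kB (1 GiB) request for nodes 0 and 1; the records that follow resolve it to 512 pages of the default 2048 kB hugepage size and give each listed node that count (NRHUGE=512, HUGENODE=0,1). A rough bash rendering of that arithmetic, using illustrative variable names rather than the exact setup/hugepages.sh internals:

# Illustrative only: mirrors the sizing visible in the surrounding trace.
size_kb=1048576                                     # requested size, 1 GiB
default_hugepage_kb=2048                            # Hugepagesize reported in the meminfo dumps below
nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 1048576 / 2048 = 512

nodes_test=()
for node in 0 1; do                                 # the node ids passed on the command line
    nodes_test[$node]=$nr_hugepages
done
echo "NRHUGE=$nr_hugepages HUGENODE=0,1"            # 512 pages per node, as the log shows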
00:02:59.224 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:59.225 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:59.225 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:59.225 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:59.225 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:59.225 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:59.225 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:59.225 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:59.225 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:59.225 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:59.225 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:59.225 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:59.225 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:59.225 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:59.225 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:59.225 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:59.225 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:59.225 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.225 20:01:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:00.165 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:00.165 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:00.165 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:00.165 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:00.165 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:00.165 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:00.165 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:00.165 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:00.165 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:00.165 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:00.165 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:00.165 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:00.165 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:00.165 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:00.165 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:00.165 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:00.165 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47127176 kB' 'MemAvailable: 50659500 kB' 'Buffers: 10496 kB' 'Cached: 8882380 kB' 'SwapCached: 0 kB' 'Active: 6235268 kB' 'Inactive: 3405072 kB' 'Active(anon): 5690244 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 750872 kB' 'Mapped: 144444 kB' 'Shmem: 4942780 kB' 'KReclaimable: 152172 kB' 'Slab: 427604 kB' 'SReclaimable: 152172 kB' 'SUnreclaim: 275432 kB' 'KernelStack: 12704 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610876 kB' 'Committed_AS: 7320400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193192 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.431 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
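The AnonHugePages scan that finishes just below is the first of three meminfo reads this verify step makes: anonymous THP usage is sampled because the transparent_hugepage setting checked earlier is not "never", and the same scan is then repeated for HugePages_Surp (surp=0) and HugePages_Rsvd further down. A compact stand-in for those three reads, assuming an awk lookup and the usual sysfs path rather than the traced shell loop:

# Assumed names and paths; this only reproduces the three reads visible here,
# not the later comparison against the expected per-node page counts.
meminfo_val() { awk -v k="$1:" '$1 == k { print $2; exit }' /proc/meminfo; }

anon=0
thp=/sys/kernel/mm/transparent_hugepage/enabled
if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
    anon=$(meminfo_val AnonHugePages)   # 0 kB in the dumps above
fi
surp=$(meminfo_val HugePages_Surp)      # 0
resv=$(meminfo_val HugePages_Rsvd)      # 0
echo "anon=$anon surp=$surp resv=$resv"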
00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.432 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:00.433 20:01:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47127468 kB' 'MemAvailable: 50659808 kB' 'Buffers: 10496 kB' 'Cached: 8882384 kB' 'SwapCached: 0 kB' 'Active: 6235680 kB' 'Inactive: 3405072 kB' 'Active(anon): 5690656 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 751368 kB' 'Mapped: 144520 kB' 'Shmem: 4942784 kB' 'KReclaimable: 152204 kB' 'Slab: 427688 kB' 'SReclaimable: 152204 kB' 'SUnreclaim: 275484 kB' 'KernelStack: 12704 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610876 kB' 'Committed_AS: 7320420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193160 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.433 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.434 20:01:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.434 20:01:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # the remaining /proc/meminfo keys (VmallocUsed through HugePages_Rsvd) are each read with IFS=': ' read -r var val _ and skipped with continue
00:03:00.434 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:00.435 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:00.435 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:00.435 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:00.435 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:00.435 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-29 -- # local get=HugePages_Rsvd node= var val mem_f mem; mem_f=/proc/meminfo (no node argument, so the per-node file is not used); mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:03:00.435 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47127840 kB' 'MemAvailable: 50660180 kB' 'Buffers: 10496 kB' 'Cached: 8882400 kB' 'SwapCached: 0 kB' 'Active: 6234984 kB' 'Inactive: 3405072 kB' 'Active(anon): 5689960 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 750536 kB' 'Mapped: 144420 kB' 'Shmem: 4942800 kB' 'KReclaimable: 152204 kB' 'Slab: 427696 kB' 'SReclaimable: 152204 kB' 'SUnreclaim: 275492 kB' 'KernelStack: 12704 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610876 kB' 'Committed_AS: 7320440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193176 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB'
00:03:00.435 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # every key from MemTotal through HugePages_Free is read and skipped with continue
00:03:00.437 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:00.437 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:00.437 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:00.437 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:00.437 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:00.437 nr_hugepages=1024
00:03:00.437 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:00.437 resv_hugepages=0
00:03:00.437 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:00.437 surplus_hugepages=0
00:03:00.437 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:00.437 anon_hugepages=0
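The repeated IFS=': ' / read / continue entries above are all iterations of one small helper: setup/common.sh's get_meminfo loads a meminfo file into an array and scans it key by key until the requested key is found, then prints that key's value. A minimal sketch of that helper as it can be reconstructed from the xtrace (an approximation for reference, not the verbatim SPDK source; the shopt line is added here only so the +([0-9]) pattern works standalone):

    # get_meminfo <key> [node]: print the value of <key> from /proc/meminfo, or from
    # /sys/devices/system/node/node<N>/meminfo when a node number is given.
    shopt -s extglob # the "Node +([0-9]) " prefix strip below uses an extended glob
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f=/proc/meminfo mem
        # With no node argument, /sys/devices/system/node/node/meminfo does not
        # exist and the system-wide file is kept, exactly as seen in the trace.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node meminfo lines carry a "Node N " prefix; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" # any trailing "kB" unit falls into the discarded third field
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo HugePages_Rsvd against the snapshot above it prints 0, which is the value setup/hugepages.sh stores in resv.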
00:03:00.437 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:00.437 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:00.437 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:00.437 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-29 -- # local get=HugePages_Total node= var val mem_f mem; mem_f=/proc/meminfo (again the system-wide file); mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:03:00.437 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' (same /proc/meminfo snapshot as above except 'Cached: 8882424 kB' 'AnonPages: 750540 kB' 'Shmem: 4942824 kB' 'Slab: 427664 kB' 'SUnreclaim: 275460 kB' 'Committed_AS: 7320464 kB')
00:03:00.437 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # every key from MemTotal through Unaccepted is read and skipped with continue
00:03:00.439 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:00.439 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:00.439 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:00.439 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:00.439 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:00.439 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27-30 -- # local node; for node in /sys/devices/system/node/node+([0-9]): nodes_sys[${node##*node}]=512 for each of the two nodes
00:03:00.439 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:00.439 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
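Taken together, the checks above and the per-node loop that follows amount to a two-level verification: the system-wide HugePages_Total reported by the kernel (1024) must equal nr_hugepages + surp + resv, and the pool is then expected to be split evenly, 512 pages per NUMA node, with each node re-checked through its own meminfo file. A rough sketch of that bookkeeping, reusing the get_meminfo sketch above (the trace shows both a nodes_sys and a nodes_test array; a single nodes_test array is used here, and the per-node comparison the script performs afterwards is not visible in this part of the log, so an echo stands in for it):

    # Rough shape of the hugepage bookkeeping traced from setup/hugepages.sh.
    nr_hugepages=1024 surp=0 resv=0
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

    nodes_test=()
    for node in /sys/devices/system/node/node[0-9]*; do
        nodes_test[${node##*node}]=512 # expected even split: 512 pages per node
    done
    no_nodes=${#nodes_test[@]} # 2 on this machine
    (( no_nodes > 0 )) || exit 1

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv )) # reserved pages are folded into the expected count
        # Per-node counters come from /sys/devices/system/node/node<N>/meminfo.
        echo "node$node: expected ${nodes_test[node]}, surplus $(get_meminfo HugePages_Surp "$node")"
    done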
00:03:00.439 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:00.439 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:00.439 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:00.439 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-29 -- # local get=HugePages_Surp node=0 var val mem_f mem; /sys/devices/system/node/node0/meminfo exists, so mem_f=/sys/devices/system/node/node0/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:03:00.439 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 29073068 kB' 'MemUsed: 3756816 kB' 'SwapCached: 0 kB' 'Active: 1783384 kB' 'Inactive: 161052 kB' 'Active(anon): 1621748 kB' 'Inactive(anon): 0 kB' 'Active(file): 161636 kB' 'Inactive(file): 161052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1445700 kB' 'Mapped: 94568 kB' 'AnonPages: 501984 kB' 'Shmem: 1123012 kB' 'KernelStack: 7960 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 73056 kB' 'Slab: 200036 kB' 'SReclaimable: 73056 kB' 'SUnreclaim: 126980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:00.439 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # the node0 keys MemTotal through HugePages_Total are read with IFS=': ' read -r var val _ and skipped with continue, and the scan goes on:
00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read
-r var val _ 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.440 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.441 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711816 kB' 'MemFree: 18062584 kB' 'MemUsed: 9649232 kB' 'SwapCached: 0 kB' 'Active: 4451976 kB' 'Inactive: 3244020 kB' 'Active(anon): 4068588 kB' 'Inactive(anon): 0 kB' 'Active(file): 383388 kB' 'Inactive(file): 3244020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7447264 kB' 'Mapped: 49852 kB' 'AnonPages: 248868 kB' 'Shmem: 3819856 kB' 'KernelStack: 4760 kB' 'PageTables: 3432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79148 kB' 'Slab: 227628 kB' 'SReclaimable: 79148 kB' 'SUnreclaim: 148480 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:00.441 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.441 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.441 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- 
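The xtrace above shows setup/common.sh's get_meminfo resolving HugePages_Surp for NUMA node 1: it switches from /proc/meminfo to /sys/devices/system/node/node1/meminfo, strips the "Node 1 " prefix from each line, then walks the fields with IFS=': ' read until the requested key matches and echoes its value. A minimal Bash sketch of that pattern follows; it is reconstructed from the trace, not copied from the SPDK source, and the name get_meminfo_sketch is illustrative.

    get_meminfo_sketch() {
      local get=$1 node=$2 line var val _
      local mem_f=/proc/meminfo
      # Per-node counters live in sysfs and are prefixed with "Node <n> ".
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      shopt -s extglob                          # needed for the +([0-9]) pattern below
      mem=("${mem[@]#Node +([0-9]) }")          # strip the per-node prefix
      for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"  # e.g. "HugePages_Surp:  0" -> var/val
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
    }
    # Usage matching the call in the trace: get_meminfo_sketch HugePages_Surp 1  ->  0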
00:03:00.441 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32 xtrace loop: IFS=': ' / read -r var val _ / continue for each node1 meminfo field from MemFree through Unaccepted while scanning for HugePages_Surp]
00:03:00.442 20:01:47
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:00.442 node0=512 expecting 512 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:00.442 node1=512 expecting 512 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:00.442 00:03:00.442 real 0m1.337s 00:03:00.442 user 0m0.565s 00:03:00.442 sys 0m0.730s 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:00.442 20:01:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:00.442 ************************************ 00:03:00.442 END TEST per_node_1G_alloc 00:03:00.442 ************************************ 00:03:00.442 20:01:47 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:00.442 20:01:47 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:00.442 20:01:47 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:00.442 20:01:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:00.442 ************************************ 00:03:00.442 START TEST even_2G_alloc 
00:03:00.442 ************************************ 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:00.442 20:01:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:01.378 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:01.378 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:01.378 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:01.641 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:01.641 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:01.641 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:01.641 0000:00:04.1 (8086 
0e21): Already using the vfio-pci driver 00:03:01.641 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:01.641 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:01.641 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:01.641 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:01.641 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:01.641 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:01.641 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:01.641 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:01.641 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:01.641 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47125308 kB' 'MemAvailable: 50657664 kB' 'Buffers: 10496 kB' 'Cached: 8882564 kB' 'SwapCached: 0 kB' 'Active: 6237408 kB' 'Inactive: 3405072 kB' 'Active(anon): 5692384 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 752596 kB' 'Mapped: 144484 kB' 'Shmem: 4942964 kB' 'KReclaimable: 152236 kB' 'Slab: 427816 kB' 'SReclaimable: 152236 kB' 'SUnreclaim: 275580 kB' 'KernelStack: 12672 kB' 'PageTables: 
7748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610876 kB' 'Committed_AS: 7320740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193208 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.641 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.641 20:01:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[setup/common.sh@31-32 xtrace loop: IFS=': ' / read -r var val _ / continue for each meminfo field from Inactive through WritebackTmp while scanning for AnonHugePages]
00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
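The even_2G_alloc prologue above (setup/hugepages.sh@152 and @49-84) requests 2097152 kB of hugepages, which at the 2048 kB default page size is 1024 pages, and distributes them evenly across the two NUMA nodes; that is where the 512-per-node targets come from. A rough sketch of that split, using descriptive names rather than the real hugepages.sh helpers, and assuming the size argument is in kB as the 2097152 -> 1024 arithmetic in the trace implies:

    split_hugepages_evenly() {
      # total_kb: requested hugepage memory in kB; nodes: NUMA node count.
      local total_kb=$1 nodes=$2 hugepage_kb=${3:-2048}
      local nr_hugepages=$(( total_kb / hugepage_kb ))
      local -a nodes_test=()
      local node
      for (( node = nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( nr_hugepages / nodes ))   # 1024 pages / 2 nodes = 512
      done
      for node in "${!nodes_test[@]}"; do
        echo "node${node}=${nodes_test[node]}"
      done
    }
    # split_hugepages_evenly 2097152 2  ->  node0=512, node1=512

The real helper fills nodes_test[] from the highest node index downwards (the setup/hugepages.sh@81-84 loop in the trace); the sketch keeps only the even case relevant to this run.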
00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:01.642 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.643 20:01:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47129848 kB' 'MemAvailable: 50662204 kB' 'Buffers: 10496 kB' 'Cached: 8882564 kB' 'SwapCached: 0 kB' 'Active: 6238024 kB' 'Inactive: 3405072 kB' 'Active(anon): 5693000 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 753224 kB' 'Mapped: 144484 kB' 'Shmem: 4942964 kB' 'KReclaimable: 152236 kB' 'Slab: 427800 kB' 'SReclaimable: 152236 kB' 'SUnreclaim: 275564 kB' 'KernelStack: 12672 kB' 'PageTables: 7696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610876 kB' 'Committed_AS: 7320760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193160 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.643 20:01:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32 xtrace loop: IFS=': ' / read -r var val _ / continue for each meminfo field from Buffers through PageTables while scanning for HugePages_Surp]
00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
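For reference, the loop being traced here is the meminfo lookup helper in setup/common.sh: it reads /proc/meminfo (or a per-node meminfo file) one "key: value" pair at a time and skips every key until it reaches the requested one, HugePages_Surp in this pass, which resolves to 0. Below is a minimal, runnable sketch reconstructed from the traced statements; treat it as an approximation of the real get_meminfo rather than a verbatim copy.

    #!/usr/bin/env bash
    shopt -s extglob

    # Sketch: get_meminfo KEY [NODE] prints the value of KEY from /proc/meminfo,
    # or from the node's own meminfo file when NODE is given.
    get_meminfo() {
        local get=$1 node=$2
        local var val mem_f mem

        mem_f=/proc/meminfo
        # A per-node query reads the node's own meminfo instead
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan "key: value [kB]" pairs, skipping keys until the requested one
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"      # e.g. 0 for HugePages_Surp, 1024 for HugePages_Total
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # Usage matching the trace: surp=$(get_meminfo HugePages_Surp)
    #                per node:  get_meminfo HugePages_Surp 0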
00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.644 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47130240 kB' 'MemAvailable: 50662596 kB' 'Buffers: 10496 kB' 'Cached: 8882584 kB' 'SwapCached: 0 kB' 'Active: 6237280 kB' 'Inactive: 3405072 kB' 'Active(anon): 5692256 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 752464 kB' 'Mapped: 144436 kB' 'Shmem: 4942984 kB' 'KReclaimable: 152236 kB' 'Slab: 427804 kB' 'SReclaimable: 152236 kB' 'SUnreclaim: 275568 kB' 'KernelStack: 12688 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610876 kB' 'Committed_AS: 7320780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193160 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
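A note on the backslash-riddled right-hand sides in these tests (for example \H\u\g\e\P\a\g\e\s\_\R\s\v\d): they are not log corruption. Bash's xtrace escapes every character of an unquoted literal pattern when it prints the expanded [[ ]] test, so the comparison actually being executed is simply:

    [[ $var == HugePages_Rsvd ]]   # xtrace renders the unquoted pattern as \H\u\g\e\P\a\g\e\s\_\R\s\v\d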
00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.645 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 
20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:01.646 nr_hugepages=1024 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:01.646 resv_hugepages=0 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:01.646 surplus_hugepages=0 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:01.646 anon_hugepages=0 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:01.646 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47130364 
kB' 'MemAvailable: 50662720 kB' 'Buffers: 10496 kB' 'Cached: 8882608 kB' 'SwapCached: 0 kB' 'Active: 6237316 kB' 'Inactive: 3405072 kB' 'Active(anon): 5692292 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 752504 kB' 'Mapped: 144436 kB' 'Shmem: 4943008 kB' 'KReclaimable: 152236 kB' 'Slab: 427856 kB' 'SReclaimable: 152236 kB' 'SUnreclaim: 275620 kB' 'KernelStack: 12720 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610876 kB' 'Committed_AS: 7320804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193176 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.647 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.648 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.648 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.648 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.648 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.648 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.648 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.648 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.648 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.648 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.648 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.648 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.648 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.648 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.648 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.648 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.648 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.648 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.648 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.648 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
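The scan in progress here is re-reading HugePages_Total so the script can confirm the accounting it echoed earlier (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0). A condensed sketch of that check follows; the script itself uses its get_meminfo helper (sketched above), while plain awk is used here only to keep the example self-contained.

    #!/usr/bin/env bash
    # Condensed sketch of the accounting check traced in this stretch of the log.
    nr_hugepages=1024
    surp=$(awk '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)   # 0 in this run
    resv=$(awk '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)   # 0 in this run
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)  # 1024 in this run

    # The kernel-reported total must account for every requested page:
    # persistent pages plus surplus plus reserved.
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"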
00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 20:01:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
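The loop recorded above is setup/common.sh's get_meminfo scanning a meminfo file field by field until it reaches the requested key (HugePages_Total here, HugePages_Surp for the per-node queries below). A minimal stand-alone sketch of that scan, written against the /sys/devices/system/node/nodeN/meminfo layout this log shows; the function name and exact structure are illustrative assumptions, not a copy of the real helper:

#!/usr/bin/env bash
# Illustrative re-implementation (hypothetical name): print the value of one
# meminfo key, optionally for a single NUMA node.
shopt -s extglob                        # needed for the "Node N " prefix strip below
get_meminfo_value() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    # Per-node statistics live in sysfs and prefix every line with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$file"
    mem=("${mem[@]#Node +([0-9]) }")    # drop the per-node prefix, as common.sh@29 does
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Skip every key that is not the one we were asked for.
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# e.g. get_meminfo_value HugePages_Total    -> 1024 on this system (see echo below)
#      get_meminfo_value HugePages_Surp 0   -> 0 for node0 (per-node query run below)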
00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 29064364 kB' 'MemUsed: 3765520 kB' 'SwapCached: 0 kB' 'Active: 1786412 kB' 'Inactive: 161052 kB' 'Active(anon): 1624776 kB' 'Inactive(anon): 0 kB' 'Active(file): 161636 kB' 'Inactive(file): 161052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1445780 kB' 'Mapped: 94568 kB' 'AnonPages: 504968 kB' 'Shmem: 1123092 kB' 'KernelStack: 7976 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 73088 kB' 'Slab: 200196 kB' 'SReclaimable: 73088 kB' 'SUnreclaim: 127108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:01.911 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # (get_meminfo loop over node0 meminfo: every key from MemTotal through Unaccepted is read and skipped with 'continue' because it is not HugePages_Surp) 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711816 kB' 'MemFree: 18064768 kB' 'MemUsed: 9647048 kB' 'SwapCached: 0 kB' 'Active: 4451392 kB' 'Inactive: 3244020 kB' 'Active(anon): 4068004 kB' 'Inactive(anon): 0 kB' 'Active(file): 383388 kB' 'Inactive(file): 3244020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7447344 kB' 'Mapped: 49868 kB' 'AnonPages: 248080 kB' 'Shmem: 3819936 kB' 'KernelStack: 4744 kB' 'PageTables: 3360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79148 kB' 'Slab: 227660 kB' 'SReclaimable: 79148 kB' 'SUnreclaim: 148512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:01.912 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # (get_meminfo loop over node1 meminfo: every key from MemTotal through Unaccepted is read and skipped with 'continue' because it is not HugePages_Surp) 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:01.913 node0=512 expecting 512 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:01.913 node1=512 expecting 512 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:01.913 00:03:01.913 real 0m1.301s 00:03:01.913 user 0m0.535s 00:03:01.913 sys 0m0.723s 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:01.913 20:01:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:01.913 ************************************ 00:03:01.913 END TEST even_2G_alloc 00:03:01.913 ************************************ 00:03:01.913 20:01:48 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:01.913 20:01:48 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:01.913 20:01:48 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:01.913 20:01:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:01.913 ************************************ 00:03:01.913 START TEST odd_alloc 00:03:01.913 ************************************ 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:01.913 20:01:48 
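even_2G_alloc finishes with both nodes reporting the expected 512 pages, and odd_alloc starts here by requesting 2098176 kB, which hugepages.sh turns into nr_hugepages=1025 and, as the xtrace below records, spreads as 512 and 513 over the two NUMA nodes. A small sketch of that as-even-as-possible split; the helper name is made up, and which node receives the extra page is a detail of setup/hugepages.sh not reproduced here:

# Hypothetical helper: distribute a hugepage count over NUMA nodes as evenly
# as possible, handing any remainder to the last node.
split_hugepages_per_node() {
    local total=$1 nodes=$2
    local base=$((total / nodes)) rem=$((total % nodes)) i
    for ((i = 0; i < nodes; i++)); do
        local count=$base
        (( i == nodes - 1 )) && count=$((base + rem))
        echo "node$i=$count"
    done
}

split_hugepages_per_node 1024 2   # prints node0=512, node1=512 - the even split verified above
split_hugepages_per_node 1025 2   # prints node0=512, node1=513 - the 512/513 split used by odd_alloc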
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.913 20:01:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:02.855 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:02.855 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:02.855 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:02.855 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:02.855 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:02.855 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:02.855 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:02.855 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:02.855 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:02.855 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:02.855 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:02.855 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:02.855 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:02.855 0000:80:04.3 
(8086 0e23): Already using the vfio-pci driver 00:03:02.855 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:02.855 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:02.855 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47133000 kB' 'MemAvailable: 50665284 kB' 'Buffers: 10496 kB' 'Cached: 8882696 kB' 'SwapCached: 0 kB' 'Active: 6235668 kB' 'Inactive: 3405072 kB' 'Active(anon): 5690644 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 750632 kB' 'Mapped: 143324 kB' 'Shmem: 4943096 kB' 'KReclaimable: 152092 kB' 'Slab: 427500 kB' 'SReclaimable: 152092 kB' 'SUnreclaim: 275408 kB' 'KernelStack: 12576 kB' 'PageTables: 7288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609852 kB' 'Committed_AS: 7294680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193160 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 409180 kB' 
'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:03:03.124 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # (get_meminfo loop over /proc/meminfo: every key from MemTotal through VmallocTotal is read and skipped with 'continue' because it is not AnonHugePages) 00:03:03.125 20:01:50
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:03.125 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:03.125 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:03.125 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:03.125 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:03.125 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:03.125 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:03.125 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:03.125 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47136372 kB' 'MemAvailable: 50668656 kB' 'Buffers: 10496 kB' 'Cached: 8882696 kB' 'SwapCached: 0 kB' 'Active: 6235792 kB' 'Inactive: 3405072 kB' 'Active(anon): 5690768 kB' 'Inactive(anon): 0
kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 751180 kB' 'Mapped: 143324 kB' 'Shmem: 4943096 kB' 'KReclaimable: 152092 kB' 'Slab: 427412 kB' 'SReclaimable: 152092 kB' 'SUnreclaim: 275320 kB' 'KernelStack: 12576 kB' 'PageTables: 7256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609852 kB' 'Committed_AS: 7294696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193048 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 
20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.126 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 
20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.127 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47135712 kB' 'MemAvailable: 50667996 kB' 'Buffers: 10496 kB' 'Cached: 8882708 kB' 'SwapCached: 0 kB' 'Active: 6234520 kB' 'Inactive: 3405072 kB' 'Active(anon): 5689496 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 749888 kB' 'Mapped: 143324 kB' 'Shmem: 4943108 kB' 'KReclaimable: 152092 kB' 'Slab: 427412 kB' 'SReclaimable: 152092 kB' 'SUnreclaim: 275320 kB' 'KernelStack: 12592 kB' 'PageTables: 7244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609852 kB' 'Committed_AS: 7294724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193032 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB'
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 --
# continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.128 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.129 20:01:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.129 
20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.129 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:03.130 nr_hugepages=1025 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:03.130 resv_hugepages=0 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:03.130 surplus_hugepages=0 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:03.130 anon_hugepages=0 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:03.130 20:01:50
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47134956 kB' 'MemAvailable: 50667240 kB' 'Buffers: 10496 kB' 'Cached: 8882712 kB' 'SwapCached: 0 kB' 'Active: 6234272 kB' 'Inactive: 3405072 kB' 'Active(anon): 5689248 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 749676 kB' 'Mapped: 143324 kB' 'Shmem: 4943112 kB' 'KReclaimable: 152092 kB' 'Slab: 427412 kB' 'SReclaimable: 152092 kB' 'SUnreclaim: 275320 kB' 'KernelStack: 12624 kB' 'PageTables: 7364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609852 kB' 'Committed_AS: 7295112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193048 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.130 20:01:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.130 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.131 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- 
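Up to this point the trace is setup/common.sh's get_meminfo walking the snapshot it just printf'd: with IFS=': ' each "Key: value kB" pair is read into var/val, every non-matching key falls through to continue, and the first key equal to the requested name (HugePages_Total here) is echoed back, giving 1025. A minimal standalone sketch of that lookup, using an illustrative helper name get_meminfo_value rather than the SPDK function itself:

  #!/usr/bin/env bash
  # Sketch only: look up one key in /proc/meminfo or a per-node meminfo file,
  # the way the trace above does it. The helper name is made up for illustration.
  get_meminfo_value() {
      local get=$1 node=${2:-} line var val _
      local mem_f=/proc/meminfo
      # Per-node files live under sysfs and prefix every line with "Node N ".
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS= read -r line; do
          line=${line#Node "$node" }               # harmless on plain /proc/meminfo
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < "$mem_f"
      return 1
  }

  get_meminfo_value HugePages_Total      # prints 1025 on the system in this log
  get_meminfo_value HugePages_Free 0     # same lookup against node 0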
setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 29065556 kB' 'MemUsed: 3764328 kB' 'SwapCached: 0 kB' 'Active: 1787392 kB' 'Inactive: 161052 kB' 'Active(anon): 1625756 kB' 'Inactive(anon): 0 kB' 'Active(file): 161636 kB' 'Inactive(file): 161052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1445848 kB' 'Mapped: 93448 kB' 'AnonPages: 505996 kB' 'Shmem: 1123160 kB' 'KernelStack: 7992 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 72944 kB' 'Slab: 199904 kB' 'SReclaimable: 72944 kB' 'SUnreclaim: 126960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.132 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.133 20:01:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:03.133 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711816 kB' 'MemFree: 18070272 kB' 'MemUsed: 9641544 kB' 'SwapCached: 0 kB' 'Active: 4447232 kB' 'Inactive: 3244020 kB' 'Active(anon): 4063844 kB' 'Inactive(anon): 0 kB' 'Active(file): 383388 kB' 'Inactive(file): 3244020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7447360 kB' 'Mapped: 49880 kB' 'AnonPages: 243584 kB' 'Shmem: 3819952 kB' 'KernelStack: 4632 kB' 'PageTables: 2812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79148 kB' 'Slab: 227508 kB' 'SReclaimable: 79148 kB' 'SUnreclaim: 148360 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.134 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
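Both per-node lookups have now returned HugePages_Surp=0, so nodes_test holds the HugePages_Total seen on each node (512 and 513), and hugepages.sh builds the sorted_t/sorted_s associative arrays so the comparison treats the counts as a set: the odd page may land on either node. A rough standalone check in the same spirit, reading the sysfs per-node counters instead of the meminfo files the script parses (2048 kB is the hugepage size reported earlier in this log):

  #!/usr/bin/env bash
  # Sketch: confirm an odd total (1025 pages) was split 512/513 across the
  # NUMA nodes, without caring which node received the extra page.
  counts=()
  for node in /sys/devices/system/node/node[0-9]*; do
      counts+=("$(cat "$node"/hugepages/hugepages-2048kB/nr_hugepages)")
  done
  sorted=$(printf '%s\n' "${counts[@]}" | sort -n | xargs)
  if [[ $sorted == "512 513" ]]; then
      echo "per-node split OK: $sorted"
  else
      echo "unexpected per-node split: $sorted" >&2
  fi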
# echo 'node0=512 expecting 513' 00:03:03.135 node0=512 expecting 513 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:03.135 node1=513 expecting 512 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:03.135 00:03:03.135 real 0m1.291s 00:03:03.135 user 0m0.520s 00:03:03.135 sys 0m0.732s 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:03.135 20:01:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:03.135 ************************************ 00:03:03.135 END TEST odd_alloc 00:03:03.135 ************************************ 00:03:03.135 20:01:50 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:03.135 20:01:50 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:03.135 20:01:50 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:03.135 20:01:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:03.135 ************************************ 00:03:03.135 START TEST custom_alloc 00:03:03.135 ************************************ 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- 
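With odd_alloc passed, custom_alloc begins by turning requested sizes into page counts: from the numbers in the trace, 1048576 kB becomes 512 pages and (further below) 2097152 kB becomes 1024 pages, i.e. the size in kB divided by the 2048 kB hugepage size. A tiny sketch of that arithmetic, with an illustrative function name standing in for get_test_nr_hugepages:

  # Sketch only: convert a kB amount into 2 MiB hugepages, as the counts in
  # this trace imply (1048576 kB -> 512 pages, 2097152 kB -> 1024 pages).
  pages_for() {
      local size_kb=$1 hp_kb
      hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this box
      echo $(( size_kb / hp_kb ))
  }
  pages_for 1048576    # 512
  pages_for 2097152    # 1024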
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:03.135 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
-- # for node in "${!nodes_hp[@]}" 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.136 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:04.075 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:04.075 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:04.076 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:04.076 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:04.338 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:04.338 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:04.338 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:04.338 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:04.338 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:04.338 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:04.338 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:04.338 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:04.338 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:04.338 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:04.338 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:04.338 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:04.338 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:04.338 20:01:51 setup.sh.hugepages.custom_alloc -- 
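The end result of the bookkeeping above is HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' (1536 pages total), after which scripts/setup.sh is re-run and simply reports that the PCI functions are already bound to vfio-pci. One way such a per-node request is commonly applied is via the standard sysfs knobs, sketched below; this is illustrative only and not necessarily the exact commands setup.sh issues:

  #!/usr/bin/env bash
  # Sketch: apply a per-NUMA-node 2 MiB hugepage request equivalent to
  # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024', via the generic sysfs interface.
  declare -A want=( [0]=512 [1]=1024 )
  for node in "${!want[@]}"; do
      echo "${want[$node]}" | sudo tee \
          "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages"
  done
  grep -E 'HugePages_Total|Hugetlb' /proc/meminfo   # expect 1536 pages / 3145728 kB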
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:04.338 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:04.338 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:04.338 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:04.338 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:04.338 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:04.338 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:04.338 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:04.338 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:04.338 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:04.338 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:04.338 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:04.338 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:04.338 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.338 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.338 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.338 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.338 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 46064352 kB' 'MemAvailable: 49596648 kB' 'Buffers: 10496 kB' 'Cached: 8882832 kB' 'SwapCached: 0 kB' 'Active: 6234724 kB' 'Inactive: 3405072 kB' 'Active(anon): 5689700 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 749716 kB' 'Mapped: 143392 kB' 'Shmem: 4943232 kB' 'KReclaimable: 152116 kB' 'Slab: 427404 kB' 'SReclaimable: 152116 kB' 'SUnreclaim: 275288 kB' 'KernelStack: 12656 kB' 'PageTables: 7392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086588 kB' 'Committed_AS: 7295312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193064 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
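The field-by-field walk above (and continuing below) is how the trace resolves get_meminfo AnonHugePages to 0 before anon=0 is recorded: /proc/meminfo is read with IFS=': ' and each key is compared against the requested field. A minimal standalone sketch of that kind of lookup, assuming nothing about the real setup/common.sh beyond what the trace shows:

    # Sketch only -- not the SPDK setup/common.sh implementation.
    # Return the value of one /proc/meminfo field, defaulting to 0 if absent.
    get_meminfo_sketch() {
        local key=$1 file=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            # e.g. "HugePages_Total:    1536" -> var=HugePages_Total, val=1536
            [[ $var == "$key" ]] && { echo "${val:-0}"; return 0; }
        done < "$file"
        echo 0
    }
    # get_meminfo_sketch HugePages_Total   -> 1536 per the snapshot above
    # get_meminfo_sketch AnonHugePages     -> 0, matching anon=0 below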
00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.339 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 46067368 kB' 'MemAvailable: 49599664 kB' 'Buffers: 10496 kB' 'Cached: 8882832 kB' 'SwapCached: 0 kB' 'Active: 6235160 kB' 'Inactive: 3405072 kB' 'Active(anon): 5690136 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 750152 kB' 'Mapped: 143392 kB' 'Shmem: 4943232 kB' 'KReclaimable: 152116 kB' 'Slab: 427408 kB' 'SReclaimable: 152116 kB' 'SUnreclaim: 275292 kB' 'KernelStack: 12672 kB' 'PageTables: 7392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086588 kB' 'Committed_AS: 7295332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193048 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.340 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.341 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
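The same walk ends at HugePages_Surp a few entries below (surp=0) and is then repeated once more for HugePages_Rsvd; these values feed the verify_nr_hugepages check invoked above with nr_hugepages=1536. As an aside, a hedged way to confirm that the HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' split requested earlier actually landed per NUMA node is the standard kernel sysfs layout for 2048 kB hugepages (illustrative sketch, not part of this job's output):

    # Sketch only: print per-node 2 MiB hugepage counts.
    for node in /sys/devices/system/node/node[0-9]*; do
        printf '%s: %s\n' "${node##*/}" \
            "$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")"
    done
    # Expected here, given the request above: node0: 512, node1: 1024 (total 1536).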
00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 46067268 kB' 'MemAvailable: 49599564 kB' 'Buffers: 10496 kB' 'Cached: 8882852 kB' 'SwapCached: 0 kB' 'Active: 6236400 kB' 'Inactive: 3405072 kB' 'Active(anon): 5691376 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 751396 kB' 'Mapped: 143800 kB' 'Shmem: 4943252 kB' 'KReclaimable: 152116 kB' 'Slab: 427376 kB' 'SReclaimable: 152116 kB' 'SUnreclaim: 275260 kB' 'KernelStack: 12624 kB' 'PageTables: 7232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086588 kB' 'Committed_AS: 7298292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
193032 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.342 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... xtrace of the field-by-field scan elided: setup/common.sh@31-32 reads each remaining /proc/meminfo field (Zswapped through HugePages_Free) with IFS=': ' and skips it with continue because it is not HugePages_Rsvd ...]
00:03:04.344 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:04.344 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:04.344 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
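The trace above is SPDK's setup/common.sh get_meminfo helper walking the buffered /proc/meminfo contents one field at a time until it reaches the requested key (HugePages_Rsvd here, which comes back as 0). A minimal stand-alone sketch of that lookup pattern follows; it is an illustration only, not the exact SPDK helper (the real one buffers the file with mapfile and strips the "Node <n> " prefix with an extglob pattern, as the trace shows), and the name get_meminfo_sketch is hypothetical:

    #!/usr/bin/env bash
    # Sketch: print the value of one field from /proc/meminfo, or from a
    # per-node meminfo file when a node id is given.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node files prefix each line with "Node <N> "; strip that, then
        # split on ':' and whitespace the same way the trace's read loop does.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }
    # On the machine in this run: get_meminfo_sketch HugePages_Rsvd   -> 0
    #                             get_meminfo_sketch HugePages_Total  -> 1536
    #                             get_meminfo_sketch HugePages_Surp 0 -> 0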
00:03:04.344 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:04.344 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:04.344 nr_hugepages=1536
00:03:04.344 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:04.344 resv_hugepages=0
00:03:04.344 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:04.344 surplus_hugepages=0
00:03:04.344 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:04.344 anon_hugepages=0
00:03:04.344 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:04.344 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:04.344 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... get_meminfo prologue elided (setup/common.sh@17-31): get=HugePages_Total, node unset, mem_f=/proc/meminfo, contents buffered with mapfile and any "Node <n> " prefix stripped before the read loop starts ...]
00:03:04.344 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 46067308 kB' 'MemAvailable: 49599604 kB' 'Buffers: 10496 kB' 'Cached: 8882872 kB' 'SwapCached: 0 kB' 'Active: 6239808 kB' 'Inactive: 3405072 kB' 'Active(anon): 5694784 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 754848 kB' 'Mapped: 144204 kB' 'Shmem: 4943272 kB' 'KReclaimable: 152116 kB' 'Slab: 427368 kB' 'SReclaimable: 152116 kB' 'SUnreclaim: 275252 kB' 'KernelStack: 12608 kB' 'PageTables: 7216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086588 kB' 'Committed_AS: 7301492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193036 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB'
[... xtrace of the field-by-field scan elided: each field of the dump above (MemTotal through Unaccepted) is compared against HugePages_Total at setup/common.sh@32 and skipped with continue ...]
00:03:04.346 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:04.346 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:04.346 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:04.346 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:04.346 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:04.346 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:04.346 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:04.346 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:04.346 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:04.346 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:04.346 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:04.346 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
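get_nodes has just recorded the two NUMA nodes and the split this custom_alloc test asked for: 512 huge pages on node 0 and 1024 on node 1, matching the 1536 reported system-wide. The loop that follows (setup/hugepages.sh@115-@117 in the trace) re-reads each node's HugePages_Surp and folds it, along with the reserved count, into the expected per-node figures. A condensed sketch of that bookkeeping follows, reusing the hypothetical get_meminfo_sketch helper from above; the array and variable names are illustrative, and the final per-node comparison is an assumption about what the script does after this excerpt:

    # Expected per-node split, as configured by this custom_alloc test.
    nodes_test=([0]=512 [1]=1024)
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)  # 1536 in this run

    # Global check, as at setup/hugepages.sh@107/@110: 512 + 1024 == 1536.
    (( total == 512 + 1024 + surp + resv )) || echo "unexpected hugepage total"

    for node in "${!nodes_test[@]}"; do
        # Fold reserved and this node's surplus pages into the expected count
        # (mirrors hugepages.sh@116-@117 in the trace).
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo_sketch HugePages_Surp "$node") ))
        # Assumed follow-up: compare against the count this node reports.
        (( nodes_test[node] == $(get_meminfo_sketch HugePages_Total "$node") )) ||
            echo "node $node: unexpected hugepage count"
    done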
00:03:04.346 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:04.346 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:04.346 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[... get_meminfo prologue elided (setup/common.sh@17-31): get=HugePages_Surp, node=0, mem_f=/sys/devices/system/node/node0/meminfo, contents buffered with mapfile and the "Node 0 " prefix stripped ...]
00:03:04.607 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 29046536 kB' 'MemUsed: 3783348 kB' 'SwapCached: 0 kB' 'Active: 1789420 kB' 'Inactive: 161052 kB' 'Active(anon): 1627784 kB' 'Inactive(anon): 0 kB' 'Active(file): 161636 kB' 'Inactive(file): 161052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1445852 kB' 'Mapped: 93940 kB' 'AnonPages: 507824 kB' 'Shmem: 1123164 kB' 'KernelStack: 7992 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 72936 kB' 'Slab: 199692 kB' 'SReclaimable: 72936 kB' 'SUnreclaim: 126756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... field-by-field scan elided: each field of the node 0 dump (MemTotal through HugePages_Free) is compared against HugePages_Surp at setup/common.sh@32 and skipped with continue ...]
00:03:04.608 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:04.608 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:04.608 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:04.608 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:04.608 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:04.608 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:04.608 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
[... get_meminfo prologue elided (setup/common.sh@17-31): get=HugePages_Surp, node=1, mem_f=/sys/devices/system/node/node1/meminfo, contents buffered with mapfile and the "Node 1 " prefix stripped ...]
00:03:04.608 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711816 kB' 'MemFree: 17020016 kB' 'MemUsed: 10691800 kB' 'SwapCached: 0 kB' 'Active: 4446296 kB' 'Inactive: 3244020 kB' 'Active(anon): 4062908 kB' 'Inactive(anon): 0 kB' 'Active(file): 383388 kB' 'Inactive(file): 3244020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7447560 kB' 'Mapped: 49892 kB' 'AnonPages: 242896 kB' 'Shmem: 3820152 kB' 'KernelStack: 4632 kB' 'PageTables: 2812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79180 kB' 'Slab: 227676 kB' 'SReclaimable: 79180 kB' 'SUnreclaim: 148496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... field-by-field scan of the node 1 dump in progress: fields MemTotal through ShmemHugePages have so far been compared against HugePages_Surp at setup/common.sh@32 and skipped with continue; the scan continues below ...]
00:03:04.609 20:01:51
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:04.609 node0=512 expecting 512 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:04.609 node1=1024 expecting 1024 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:04.609 00:03:04.609 real 0m1.290s 00:03:04.609 user 0m0.529s 00:03:04.609 sys 0m0.718s 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:04.609 20:01:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:04.609 ************************************ 00:03:04.609 END TEST custom_alloc 00:03:04.609 ************************************ 00:03:04.609 20:01:51 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:04.609 20:01:51 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:04.609 20:01:51 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:04.609 20:01:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:04.609 ************************************ 00:03:04.609 START TEST no_shrink_alloc 00:03:04.609 ************************************ 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
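The repeated trace above is setup/common.sh's get_meminfo helper scanning a meminfo file field by field: each line is split with IFS=': ' into a key and a value, compared against the requested key (HugePages_Surp here), and skipped with continue until the key matches, at which point the value is echoed back to hugepages.sh. A minimal sketch of that lookup, assuming only the system-wide /proc/meminfo path (the per-node /sys/devices/system/node/nodeN/meminfo branch visible at common.sh@23, and the "Node N" prefix stripping at common.sh@29, are left out; the function name below is illustrative, not the real helper):

get_meminfo_lookup() {                     # illustrative stand-in for the get_meminfo traced above
    local get=$1 var val
    while IFS=': ' read -r var val _; do   # same IFS=': ' / read -r var val _ pattern as the trace
        [[ $var == "$get" ]] || continue   # skip every meminfo field that is not the requested key
        echo "$val"                        # numeric value only; a trailing "kB" falls into _ and is dropped
        return 0
    done < /proc/meminfo
}

get_meminfo_lookup HugePages_Surp          # the surplus-hugepage count polled throughout this test

Each surplus lookup here returns 0, so (( nodes_test[node] += 0 )) leaves the per-node counts untouched and custom_alloc's final check compares the unchanged 512/1024 split against the expected "512,1024" string, as echoed in the node0/node1 lines above.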
00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.609 20:01:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:05.543 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:05.543 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:05.543 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:05.543 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:05.543 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:05.543 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:05.543 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:05.543 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:05.543 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:05.543 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:05.543 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:05.543 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:05.543 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:05.543 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:05.543 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:05.543 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:05.543 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.804 20:01:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47116284 kB' 'MemAvailable: 50648592 kB' 'Buffers: 10496 kB' 'Cached: 8882956 kB' 'SwapCached: 0 kB' 'Active: 6237392 kB' 'Inactive: 3405072 kB' 'Active(anon): 5692368 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 752252 kB' 'Mapped: 143324 kB' 'Shmem: 4943356 kB' 'KReclaimable: 152140 kB' 'Slab: 427572 kB' 'SReclaimable: 152140 kB' 'SUnreclaim: 275432 kB' 'KernelStack: 13088 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610876 kB' 'Committed_AS: 7295440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193384 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47119904 kB' 'MemAvailable: 50652212 kB' 'Buffers: 10496 kB' 'Cached: 8882956 kB' 'SwapCached: 0 kB' 'Active: 6237156 kB' 'Inactive: 3405072 kB' 'Active(anon): 5692132 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 752000 kB' 'Mapped: 143400 kB' 'Shmem: 4943356 kB' 'KReclaimable: 152140 kB' 'Slab: 427616 kB' 'SReclaimable: 152140 kB' 'SUnreclaim: 275476 kB' 'KernelStack: 12720 kB' 'PageTables: 6992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610876 kB' 'Committed_AS: 7295456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193208 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 409180 kB' 
'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.804 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 
20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 
20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue
00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:05.805 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47120528 kB' 'MemAvailable: 50652836 kB' 'Buffers: 10496 kB' 'Cached: 8882976 kB' 'SwapCached: 0 kB' 'Active: 6236940 kB' 'Inactive: 3405072 kB' 'Active(anon): 5691916 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 751792 kB' 'Mapped: 143424 kB' 'Shmem: 4943376 kB' 'KReclaimable: 152140 kB' 'Slab: 427568 kB' 'SReclaimable: 152140 kB' 'SUnreclaim: 275428 kB' 'KernelStack: 12656 kB' 'PageTables: 7264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610876 kB' 'Committed_AS: 7295480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193192 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB'
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
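The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo line by line until it reaches the requested key. A minimal re-creation of that parsing loop, reconstructed from the xtrace for illustration; the function name get_meminfo_sketch is hypothetical and this is a sketch, not the verbatim SPDK script:

    # Sketch only: reconstructed from the xtrace above, not copied from setup/common.sh.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo mem
        # A per-node query reads that node's meminfo; its lines carry a "Node <N> "
        # prefix which is stripped so the keys match the system-wide file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        shopt -s extglob
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        # Walk the "Key: value kB" lines until the requested key is found.
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Rsvd    -> 0 on this runner
    #      get_meminfo_sketch HugePages_Surp 0  -> 0 for NUMA node 0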
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:05.806 nr_hugepages=1024
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:05.806 resv_hugepages=0
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:05.806 surplus_hugepages=0
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:05.806 anon_hugepages=0
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:05.806 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47120900 kB' 'MemAvailable: 50653208 kB' 'Buffers: 10496 kB' 'Cached: 8882996 kB' 'SwapCached: 0 kB' 'Active: 6237100 kB' 'Inactive: 3405072 kB' 'Active(anon): 5692076 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 751920 kB' 'Mapped: 143348 kB' 'Shmem: 4943396 kB' 'KReclaimable: 152140 kB' 'Slab: 427588 kB' 'SReclaimable: 152140 kB' 'SUnreclaim: 275448 kB' 'KernelStack: 12656 kB' 'PageTables: 7256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610876 kB' 'Committed_AS: 7295500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193192 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB'
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
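get_nodes above walks /sys/devices/system/node/node* and records a per-node hugepage count (1024 on node0, 0 on node1 for this runner). A hedged sketch of that enumeration follows; reading nr_hugepages from sysfs is an assumption made for illustration and not necessarily how setup/hugepages.sh fills the array:

    # Sketch only: per-node hugepage bookkeeping, assuming 2048 kB pages.
    declare -A nodes_sys
    shopt -s extglob nullglob
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips the path prefix, leaving the numeric node id.
        nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # the trace below reports no_nodes=2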
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28000208 kB' 'MemUsed: 4829676 kB' 'SwapCached: 0 kB' 'Active: 1790428 kB' 'Inactive: 161052 kB' 'Active(anon): 1628792 kB' 'Inactive(anon): 0 kB' 'Active(file): 161636 kB' 'Inactive(file): 161052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1445928 kB' 'Mapped: 93444 kB' 'AnonPages: 508804 kB' 'Shmem: 1123240 kB' 'KernelStack: 8056 kB' 'PageTables: 4544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 72968 kB' 'Slab: 199904 kB' 'SReclaimable: 72968 kB' 'SUnreclaim: 126936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.807 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:05.808 node0=1024 expecting 1024 00:03:05.808 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:05.808 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:05.808 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:05.808 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:05.808 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.808 20:01:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:06.743 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:06.743 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:07.008 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:07.008 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:07.008 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:07.008 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:07.008 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:07.008 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:07.008 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:07.008 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:07.008 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:07.008 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:07.008 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:07.008 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:07.008 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:07.008 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:07.008 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:07.008 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:07.008 20:01:54 
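
The trace above shows the no_shrink_alloc case re-running setup.sh with NRHUGE=512 and CLEAR_HUGE=no: setup.sh finds 1024 hugepages already allocated on node0 and leaves them in place, and verify_nr_hugepages then re-reads the counters to confirm the 1024-page reservation is still intact. A minimal sketch of that kind of counter check follows; it assumes the standard /proc/meminfo and per-node sysfs interfaces for 2048 kB pages (the script itself parses per-node meminfo through its get_meminfo helper), and the variable names are illustrative only.

#!/usr/bin/env bash
# Sketch: report system-wide hugepage totals and check each NUMA node
# against an expected per-node count (1024 in the run logged above).
expected=1024

# System-wide counters from /proc/meminfo.
grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo

# Per-node counters from sysfs, one directory per NUMA node.
for node_dir in /sys/devices/system/node/node*; do
    node=${node_dir##*/node}
    nr=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    echo "node${node}=${nr} expecting ${expected}"
    if [[ "$nr" -ne "$expected" ]]; then
        echo "node${node}: unexpected hugepage count" >&2
    fi
done
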
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47079040 kB' 'MemAvailable: 50611348 kB' 'Buffers: 10496 kB' 'Cached: 8883064 kB' 'SwapCached: 0 kB' 'Active: 6239412 kB' 'Inactive: 3405072 kB' 'Active(anon): 5694388 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 754332 kB' 'Mapped: 143532 kB' 'Shmem: 4943464 kB' 'KReclaimable: 152140 kB' 'Slab: 427640 kB' 'SReclaimable: 152140 kB' 'SUnreclaim: 275500 kB' 'KernelStack: 12704 kB' 'PageTables: 7444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610876 kB' 'Committed_AS: 7295676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193224 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.008 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
[… repetitive xtrace elided: the remaining meminfo fields up through VmallocTotal each fail the AnonHugePages match at setup/common.sh@32 and continue …]
setup/common.sh@32 -- # continue 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:07.009 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47081044 kB' 'MemAvailable: 50613352 kB' 'Buffers: 10496 kB' 'Cached: 8883068 kB' 'SwapCached: 0 kB' 'Active: 6239464 kB' 'Inactive: 3405072 kB' 'Active(anon): 5694440 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 754352 kB' 'Mapped: 143532 kB' 'Shmem: 4943468 kB' 'KReclaimable: 152140 kB' 'Slab: 427612 kB' 'SReclaimable: 152140 kB' 'SUnreclaim: 275472 kB' 'KernelStack: 12720 kB' 'PageTables: 7372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610876 kB' 'Committed_AS: 7295692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193176 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.010 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.010 20:01:54 
[… repetitive xtrace elided: the meminfo fields SwapCached through CmaFree each fail the HugePages_Surp match at setup/common.sh@32 and continue …]
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.011 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47081352 kB' 'MemAvailable: 50613660 kB' 'Buffers: 10496 kB' 'Cached: 8883084 kB' 'SwapCached: 0 kB' 'Active: 6238724 kB' 
'Inactive: 3405072 kB' 'Active(anon): 5693700 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 753544 kB' 'Mapped: 143360 kB' 'Shmem: 4943484 kB' 'KReclaimable: 152140 kB' 'Slab: 427644 kB' 'SReclaimable: 152140 kB' 'SUnreclaim: 275504 kB' 'KernelStack: 12704 kB' 'PageTables: 7312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610876 kB' 'Committed_AS: 7295712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193176 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.012 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.012 20:01:54 
[… repetitive xtrace elided: the meminfo fields Active through NFS_Unstable each fail the HugePages_Rsvd match at setup/common.sh@32 and continue; the scan proceeds over the remaining fields …]
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
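The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue" entries here is setup/common.sh's get_meminfo scanning every /proc/meminfo field until it reaches HugePages_Rsvd, whose value it echoes a little further down in the trace. A minimal standalone sketch of that lookup, assuming bash and a readable /proc/meminfo; the function name is illustrative, not one of the SPDK helpers:

    # a minimal sketch of the lookup being traced here, assuming bash and a
    # readable /proc/meminfo; the function name is illustrative
    meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "$val"   # numeric value only, matching what get_meminfo echoes
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    meminfo_value HugePages_Rsvd   # on this runner the trace shows 0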
00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:07.013 nr_hugepages=1024 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:07.013 resv_hugepages=0 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:07.013 surplus_hugepages=0 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:07.013 anon_hugepages=0 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.013 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541700 kB' 'MemFree: 47081712 kB' 'MemAvailable: 50614020 kB' 'Buffers: 10496 kB' 
'Cached: 8883108 kB' 'SwapCached: 0 kB' 'Active: 6238780 kB' 'Inactive: 3405072 kB' 'Active(anon): 5693756 kB' 'Inactive(anon): 0 kB' 'Active(file): 545024 kB' 'Inactive(file): 3405072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 753576 kB' 'Mapped: 143360 kB' 'Shmem: 4943508 kB' 'KReclaimable: 152140 kB' 'Slab: 427644 kB' 'SReclaimable: 152140 kB' 'SUnreclaim: 275504 kB' 'KernelStack: 12720 kB' 'PageTables: 7368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610876 kB' 'Committed_AS: 7295736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193192 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 409180 kB' 'DirectMap2M: 8947712 kB' 'DirectMap1G: 59768832 kB' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.014 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27965312 kB' 'MemUsed: 4864572 kB' 'SwapCached: 0 kB' 'Active: 1792420 kB' 'Inactive: 161052 kB' 'Active(anon): 1630784 kB' 'Inactive(anon): 0 kB' 'Active(file): 161636 kB' 'Inactive(file): 161052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1446016 kB' 
'Mapped: 93448 kB' 'AnonPages: 510744 kB' 'Shmem: 1123328 kB' 'KernelStack: 8088 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 72968 kB' 'Slab: 199916 kB' 'SReclaimable: 72968 kB' 'SUnreclaim: 126948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.015 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
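At this point get_meminfo is re-run with node=0, so mem_f switches to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0" prefix that the script strips before matching keys. A minimal sketch of that per-node lookup, assuming bash and the NUMA sysfs layout shown in the trace; the helper name is illustrative:

    # a minimal sketch of the per-node lookup, assuming bash and the NUMA sysfs
    # layout shown in the trace; the helper name is illustrative
    node_meminfo_value() {
        local node=$1 key=$2 _ var val
        local f=/sys/devices/system/node/node$node/meminfo
        [[ -e $f ]] || return 1
        # per-node lines look like: "Node 0 HugePages_Surp:     0"
        while IFS=': ' read -r _ _ var val _; do
            if [[ $var == "$key" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < "$f"
        return 1
    }
    node_meminfo_value 0 HugePages_Surp   # node0 reports 0 in the trace above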
00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.016 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.276 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.277 20:01:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:07.277 node0=1024 expecting 1024 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:07.277 00:03:07.277 real 0m2.598s 00:03:07.277 user 0m1.081s 00:03:07.277 sys 0m1.434s 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:07.277 20:01:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:07.277 ************************************ 00:03:07.277 END TEST no_shrink_alloc 00:03:07.277 ************************************ 00:03:07.277 20:01:54 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:07.277 20:01:54 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:07.277 20:01:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:07.277 20:01:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:07.277 20:01:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:07.277 20:01:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:07.277 20:01:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:07.277 20:01:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:07.277 20:01:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:07.277 20:01:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:07.277 20:01:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:07.277 20:01:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:07.277 20:01:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:07.277 20:01:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:07.277 00:03:07.277 real 0m10.573s 00:03:07.277 user 0m3.946s 00:03:07.277 sys 0m5.418s 00:03:07.277 20:01:54 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:07.277 20:01:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:07.277 ************************************ 00:03:07.277 END TEST hugepages 00:03:07.277 ************************************ 00:03:07.277 20:01:54 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:07.277 20:01:54 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:07.277 20:01:54 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:07.277 20:01:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:07.277 ************************************ 00:03:07.277 START TEST driver 00:03:07.277 ************************************ 00:03:07.277 20:01:54 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:07.277 * Looking for test storage... 
00:03:07.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:07.277 20:01:54 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:07.277 20:01:54 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:07.277 20:01:54 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.823 20:01:56 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:09.823 20:01:56 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:09.823 20:01:56 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:09.823 20:01:56 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:09.823 ************************************ 00:03:09.824 START TEST guess_driver 00:03:09.824 ************************************ 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:09.824 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:09.824 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:09.824 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:09.824 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:09.824 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:09.824 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:09.824 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:09.824 20:01:56 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:09.824 Looking for driver=vfio-pci 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.824 20:01:56 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:10.781 20:01:57 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:10.781 20:01:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.729 20:01:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.729 20:01:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.729 20:01:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.729 20:01:58 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:11.729 20:01:58 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:11.729 20:01:58 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:11.729 20:01:58 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.266 00:03:14.266 real 0m4.619s 00:03:14.266 user 0m0.953s 00:03:14.266 sys 0m1.759s 00:03:14.266 20:02:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:14.266 20:02:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:14.266 ************************************ 00:03:14.266 END TEST guess_driver 00:03:14.266 ************************************ 00:03:14.266 00:03:14.266 real 0m6.915s 00:03:14.266 user 0m1.456s 00:03:14.266 sys 0m2.677s 00:03:14.266 20:02:01 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:14.266 
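The guess_driver trace above reduces to a three-step decision: is unsafe no-IOMMU mode enabled, are there populated IOMMU groups, and does vfio_pci resolve to loadable modules. A rough standalone sketch of that decision under the same sysfs paths; pick_vfio_driver is an illustrative name, and the script's fallback path is not visible in this log:

  pick_vfio_driver() {
      local unsafe=N ngroups
      if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
          unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      fi
      # One directory per IOMMU group; the traced run counted 141 of them.
      ngroups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 -type d 2> /dev/null | wc -l)
      if (( ngroups > 0 )) || [[ $unsafe == Y ]]; then
          # vfio_pci qualifies if modprobe can resolve it and its deps to .ko objects.
          if modprobe --show-depends vfio_pci 2> /dev/null | grep -q '\.ko'; then
              echo vfio-pci
              return 0
          fi
      fi
      echo 'No valid driver found'
      return 1
  }

  driver=$(pick_vfio_driver) || exit 1
  echo "Looking for driver=$driver"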
20:02:01 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:14.266 ************************************ 00:03:14.266 END TEST driver 00:03:14.266 ************************************ 00:03:14.266 20:02:01 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:14.266 20:02:01 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:14.266 20:02:01 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:14.266 20:02:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:14.266 ************************************ 00:03:14.266 START TEST devices 00:03:14.266 ************************************ 00:03:14.266 20:02:01 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:14.266 * Looking for test storage... 00:03:14.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:14.266 20:02:01 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:14.266 20:02:01 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:14.266 20:02:01 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:14.266 20:02:01 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:15.648 20:02:02 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:15.648 20:02:02 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:15.648 20:02:02 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:15.648 20:02:02 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:15.648 20:02:02 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:15.648 20:02:02 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:15.648 20:02:02 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:15.648 20:02:02 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:15.648 20:02:02 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:15.648 20:02:02 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:15.648 20:02:02 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:15.648 20:02:02 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:15.648 20:02:02 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:15.648 20:02:02 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:15.648 20:02:02 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:15.648 20:02:02 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:15.648 20:02:02 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:15.648 20:02:02 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:0b:00.0 00:03:15.648 20:02:02 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:03:15.648 20:02:02 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:15.648 20:02:02 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:15.648 20:02:02 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:15.648 No valid GPT data, 
bailing 00:03:15.648 20:02:02 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:15.648 20:02:02 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:15.648 20:02:02 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:15.648 20:02:02 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:15.648 20:02:02 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:15.648 20:02:02 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:15.648 20:02:02 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:15.648 20:02:02 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:15.648 20:02:02 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:15.648 20:02:02 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:0b:00.0 00:03:15.648 20:02:02 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:15.648 20:02:02 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:15.648 20:02:02 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:15.648 20:02:02 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:15.648 20:02:02 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:15.648 20:02:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:15.648 ************************************ 00:03:15.648 START TEST nvme_mount 00:03:15.648 ************************************ 00:03:15.648 20:02:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:03:15.648 20:02:02 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:15.648 20:02:02 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:15.648 20:02:02 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:15.648 20:02:02 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:15.648 20:02:02 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:15.648 20:02:02 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:15.648 20:02:02 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:15.648 20:02:02 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:15.648 20:02:02 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:15.648 20:02:02 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:15.648 20:02:02 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:15.648 20:02:02 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:15.648 20:02:02 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:15.648 20:02:02 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:15.648 20:02:02 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:15.648 20:02:02 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:15.648 20:02:02 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:15.648 20:02:02 
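Before partitioning, the devices test above checks that nvme0n1 is safe to claim: spdk-gpt.py and blkid find no partition table on it ("No valid GPT data, bailing", empty PTTYPE), and its reported 1000204886016 bytes clear the 3 GiB min_disk_size. A compressed sketch of that eligibility check; disk_is_usable is a stand-in for the block_in_use/sec_size_to_bytes helpers, not the helpers themselves:

  disk_is_usable() {
      local dev=$1 min_size=$((3 * 1024 * 1024 * 1024))   # min_disk_size in the trace
      local pt size_sectors

      # An empty PTTYPE means blkid sees neither GPT nor MBR on the disk.
      pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2> /dev/null)
      [[ -z $pt ]] || return 1

      # The kernel reports capacity in 512-byte sectors.
      size_sectors=$(< "/sys/block/$dev/size")
      (( size_sectors * 512 >= min_size ))
  }

  disk_is_usable nvme0n1 && echo "nvme0n1 selected as the test disk"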
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:15.648 20:02:02 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:16.588 Creating new GPT entries in memory. 00:03:16.588 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:16.588 other utilities. 00:03:16.588 20:02:03 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:16.588 20:02:03 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:16.588 20:02:03 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:16.588 20:02:03 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:16.588 20:02:03 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:17.972 Creating new GPT entries in memory. 00:03:17.972 The operation has completed successfully. 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 81028 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
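The partition_drive/mkfs sequence traced above boils down to: wipe the label, create one 1 GiB partition starting at sector 2048, format it ext4 and mount it under the test tree. A minimal sketch of that sequence; DISK and MNT are placeholders, and partprobe stands in for the sync_dev_uevents.sh helper the real run uses to wait for the partition uevent:

  DISK=/dev/nvme0n1
  MNT=/mnt/nvme_mount

  size=$((1024 * 1024 * 1024 / 512))   # 1 GiB expressed in 512-byte sectors
  start=2048
  end=$((start + size - 1))            # 2099199, matching the sgdisk call in the log

  sgdisk "$DISK" --zap-all                              # drop any existing GPT/MBR
  flock "$DISK" sgdisk "$DISK" --new=1:"$start":"$end"  # serialize against other writers
  partprobe "$DISK"                                     # re-read the table so ${DISK}p1 appears

  mkdir -p "$MNT"
  mkfs.ext4 -qF "${DISK}p1"
  mount "${DISK}p1" "$MNT"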
00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.972 20:02:04 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.910 20:02:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.910 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:18.910 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:18.910 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.910 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:18.910 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:18.910 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:18.910 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.910 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.910 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:18.910 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:18.910 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:18.910 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:18.910 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:19.169 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:19.169 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:19.169 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:19.169 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:19.169 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:19.169 20:02:06 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:19.169 20:02:06 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.169 20:02:06 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:19.169 20:02:06 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:19.429 20:02:06 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.429 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:19.429 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:19.429 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:19.429 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.429 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:19.429 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:19.429 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:19.429 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:19.429 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:19.429 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.429 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:19.429 20:02:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:19.429 20:02:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.429 20:02:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:20.369 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.370 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.629 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:20.629 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:20.629 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.629 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:20.629 20:02:07 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:20.629 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.629 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:0b:00.0 data@nvme0n1 '' '' 00:03:20.629 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:20.629 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:20.629 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:20.629 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:20.629 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:20.629 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:20.629 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:20.629 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.629 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:20.629 20:02:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:20.629 20:02:07 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.629 20:02:07 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 
00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:21.567 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.828 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:21.828 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:21.828 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:21.828 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:21.828 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:21.828 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:21.828 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:21.828 20:02:08 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:21.828 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:21.828 00:03:21.828 real 0m6.129s 00:03:21.828 user 0m1.409s 00:03:21.828 sys 0m2.294s 00:03:21.828 20:02:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:21.828 20:02:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:21.828 ************************************ 00:03:21.828 END TEST nvme_mount 00:03:21.828 ************************************ 00:03:21.828 
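The blocks of 0000:00:04.x / 0000:80:04.x comparisons that fill this part of the log are the verify helper scanning the setup.sh config output line by line: every PCI address is compared against the one allowed controller (0000:0b:00.0), and the matching line's status text must list the expected active mount. A condensed sketch of that scan; SETUP_SH, the expected string and the assumed line layout (BDF first, free-form status after three fields) are illustrative for this run:

  SETUP_SH=./scripts/setup.sh
  allowed=0000:0b:00.0
  expect="nvme0n1:nvme0n1p1"
  found=0

  # Only the allowed BDF matters; its status tail looks like
  # "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev".
  while read -r pci _ _ status; do
      [[ $pci == "$allowed" ]] || continue
      [[ $status == *"Active devices: "*"$expect"* ]] && found=1
  done < <(PCI_ALLOWED=$allowed "$SETUP_SH" config)

  (( found == 1 )) && echo "$allowed reports the expected active mount ($expect)"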
20:02:08 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:21.828 20:02:08 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:21.828 20:02:08 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:21.828 20:02:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:21.828 ************************************ 00:03:21.828 START TEST dm_mount 00:03:21.828 ************************************ 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:21.828 20:02:08 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:22.763 Creating new GPT entries in memory. 00:03:22.763 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:22.763 other utilities. 00:03:22.763 20:02:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:22.763 20:02:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:22.763 20:02:09 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:22.763 20:02:09 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:22.763 20:02:09 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:24.146 Creating new GPT entries in memory. 00:03:24.146 The operation has completed successfully. 
00:03:24.146 20:02:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:24.146 20:02:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:24.146 20:02:10 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:24.146 20:02:10 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:24.146 20:02:10 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:25.085 The operation has completed successfully. 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 83421 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:25.085 20:02:11 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:25.085 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:0b:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 
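dm_mount repeats the same flow with two 1 GiB partitions joined into a single device-mapper node (nvme_dm_test, resolving to /dev/dm-0) before formatting and mounting it. The log shows the dmsetup create call and the holders checks but not the table itself, so the linear concatenation below is an assumption; the mount point is likewise a placeholder for the dm_mount directory in the test tree:

  name=nvme_dm_test
  p1=/dev/nvme0n1p1
  p2=/dev/nvme0n1p2

  s1=$(blockdev --getsz "$p1")   # partition sizes in 512-byte sectors
  s2=$(blockdev --getsz "$p2")

  # Assumed table: stitch the two partitions into one linear dm target.
  {
      echo "0 $s1 linear $p1 0"
      echo "$s1 $s2 linear $p2 0"
  } | dmsetup create "$name"

  dm=$(basename "$(readlink -f "/dev/mapper/$name")")        # e.g. dm-0, as in the trace
  [[ -e /sys/class/block/${p1##*/}/holders/$dm ]] || exit 1  # nvme0n1p1 held by dm-0
  [[ -e /sys/class/block/${p2##*/}/holders/$dm ]] || exit 1  # nvme0n1p2 held by dm-0

  mkfs.ext4 -qF "/dev/mapper/$name"
  mkdir -p /mnt/dm_mount && mount "/dev/mapper/$name" /mnt/dm_mount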
00:03:25.085 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:25.085 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:25.085 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:25.085 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:25.085 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:25.085 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:25.085 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:25.085 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:25.085 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.085 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:25.085 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:25.085 20:02:12 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.085 20:02:12 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:26.023 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.023 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.023 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.023 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.023 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.023 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.023 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.023 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.023 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.023 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.023 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.023 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.023 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.023 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.023 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.023 20:02:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: 
holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:0b:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:26.023 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.282 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:26.282 20:02:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:26.282 20:02:13 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.282 20:02:13 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:27.222 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.483 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:27.483 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:27.483 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:27.483 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:27.483 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:27.483 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:27.483 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:27.483 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:27.483 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:27.483 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:27.483 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:27.483 20:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:27.483 00:03:27.483 real 0m5.562s 00:03:27.483 user 0m0.960s 00:03:27.483 sys 0m1.461s 00:03:27.483 20:02:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:27.483 20:02:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:27.483 ************************************ 00:03:27.483 END TEST dm_mount 00:03:27.483 ************************************ 00:03:27.483 20:02:14 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:27.483 20:02:14 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:27.483 20:02:14 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.483 20:02:14 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:27.483 20:02:14 setup.sh.devices -- 
setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:27.483 20:02:14 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:27.483 20:02:14 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:27.743 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:27.743 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:27.743 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:27.743 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:27.743 20:02:14 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:27.743 20:02:14 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:27.743 20:02:14 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:27.743 20:02:14 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:27.743 20:02:14 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:27.743 20:02:14 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:27.743 20:02:14 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:27.743 00:03:27.743 real 0m13.540s 00:03:27.743 user 0m2.991s 00:03:27.743 sys 0m4.738s 00:03:27.743 20:02:14 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:27.743 20:02:14 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:27.743 ************************************ 00:03:27.743 END TEST devices 00:03:27.743 ************************************ 00:03:27.743 00:03:27.743 real 0m41.157s 00:03:27.743 user 0m11.492s 00:03:27.743 sys 0m18.038s 00:03:27.743 20:02:14 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:27.743 20:02:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:27.743 ************************************ 00:03:27.743 END TEST setup.sh 00:03:27.743 ************************************ 00:03:27.743 20:02:14 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:29.120 Hugepages 00:03:29.120 node hugesize free / total 00:03:29.120 node0 1048576kB 0 / 0 00:03:29.120 node0 2048kB 2048 / 2048 00:03:29.120 node1 1048576kB 0 / 0 00:03:29.120 node1 2048kB 0 / 0 00:03:29.120 00:03:29.120 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:29.120 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:29.120 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:29.120 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:29.120 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:29.120 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:29.120 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:29.120 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:29.120 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:29.120 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:29.120 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:29.120 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:29.120 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:29.120 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:29.120 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:29.120 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:29.120 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:29.120 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:29.120 20:02:15 -- spdk/autotest.sh@130 -- # uname -s 00:03:29.120 20:02:15 -- spdk/autotest.sh@130 -- # 
[[ Linux == Linux ]] 00:03:29.120 20:02:15 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:29.120 20:02:15 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:30.062 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:30.062 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:30.062 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:30.062 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:30.062 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:30.062 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:30.062 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:30.062 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:30.062 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:30.062 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:30.062 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:30.062 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:30.062 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:30.062 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:30.062 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:30.062 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:31.005 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:31.266 20:02:18 -- common/autotest_common.sh@1528 -- # sleep 1 00:03:32.208 20:02:19 -- common/autotest_common.sh@1529 -- # bdfs=() 00:03:32.208 20:02:19 -- common/autotest_common.sh@1529 -- # local bdfs 00:03:32.208 20:02:19 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:03:32.208 20:02:19 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:03:32.208 20:02:19 -- common/autotest_common.sh@1509 -- # bdfs=() 00:03:32.208 20:02:19 -- common/autotest_common.sh@1509 -- # local bdfs 00:03:32.208 20:02:19 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:32.208 20:02:19 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:32.208 20:02:19 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:03:32.208 20:02:19 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:03:32.208 20:02:19 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:0b:00.0 00:03:32.208 20:02:19 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.588 Waiting for block devices as requested 00:03:33.588 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:33.588 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:33.588 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:33.588 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:33.588 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:33.849 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:33.849 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:33.849 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:33.849 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:03:34.110 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:34.110 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:34.110 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:34.110 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:34.380 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:34.380 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:34.380 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:34.380 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:34.639 20:02:21 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:03:34.639 20:02:21 -- 
common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:03:34.639 20:02:21 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:03:34.639 20:02:21 -- common/autotest_common.sh@1498 -- # grep 0000:0b:00.0/nvme/nvme 00:03:34.639 20:02:21 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:03:34.639 20:02:21 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:03:34.639 20:02:21 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:03:34.639 20:02:21 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:03:34.639 20:02:21 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:03:34.639 20:02:21 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:03:34.639 20:02:21 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:03:34.639 20:02:21 -- common/autotest_common.sh@1541 -- # grep oacs 00:03:34.639 20:02:21 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:03:34.639 20:02:21 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:03:34.639 20:02:21 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:03:34.639 20:02:21 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:03:34.639 20:02:21 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:03:34.639 20:02:21 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:03:34.639 20:02:21 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:03:34.639 20:02:21 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:03:34.639 20:02:21 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:03:34.639 20:02:21 -- common/autotest_common.sh@1553 -- # continue 00:03:34.639 20:02:21 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:34.639 20:02:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:34.639 20:02:21 -- common/autotest_common.sh@10 -- # set +x 00:03:34.639 20:02:21 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:34.639 20:02:21 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:34.639 20:02:21 -- common/autotest_common.sh@10 -- # set +x 00:03:34.639 20:02:21 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:36.017 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:36.017 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:36.017 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:36.017 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:36.017 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:36.017 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:36.017 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:36.017 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:36.017 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:36.017 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:36.017 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:36.017 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:36.017 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:36.017 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:36.017 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:36.017 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:36.996 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:36.996 20:02:23 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:36.996 20:02:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:36.996 20:02:23 -- 
common/autotest_common.sh@10 -- # set +x 00:03:36.997 20:02:23 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:36.997 20:02:23 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:03:36.997 20:02:23 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:03:36.997 20:02:23 -- common/autotest_common.sh@1573 -- # bdfs=() 00:03:36.997 20:02:23 -- common/autotest_common.sh@1573 -- # local bdfs 00:03:36.997 20:02:23 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:03:36.997 20:02:23 -- common/autotest_common.sh@1509 -- # bdfs=() 00:03:36.997 20:02:23 -- common/autotest_common.sh@1509 -- # local bdfs 00:03:36.997 20:02:23 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:36.997 20:02:23 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:36.997 20:02:23 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:03:36.997 20:02:24 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:03:36.997 20:02:24 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:0b:00.0 00:03:36.997 20:02:24 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:03:36.997 20:02:24 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:03:36.997 20:02:24 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:03:36.997 20:02:24 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:36.997 20:02:24 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:03:36.997 20:02:24 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:0b:00.0 00:03:36.997 20:02:24 -- common/autotest_common.sh@1588 -- # [[ -z 0000:0b:00.0 ]] 00:03:36.997 20:02:24 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=88589 00:03:36.997 20:02:24 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:36.997 20:02:24 -- common/autotest_common.sh@1594 -- # waitforlisten 88589 00:03:36.997 20:02:24 -- common/autotest_common.sh@827 -- # '[' -z 88589 ']' 00:03:36.997 20:02:24 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:36.997 20:02:24 -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:36.997 20:02:24 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:36.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:36.997 20:02:24 -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:36.997 20:02:24 -- common/autotest_common.sh@10 -- # set +x 00:03:36.997 [2024-05-16 20:02:24.067386] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:03:36.997 [2024-05-16 20:02:24.067467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88589 ] 00:03:36.997 EAL: No free 2048 kB hugepages reported on node 1 00:03:36.997 [2024-05-16 20:02:24.127070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:37.256 [2024-05-16 20:02:24.231249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:37.514 20:02:24 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:37.514 20:02:24 -- common/autotest_common.sh@860 -- # return 0 00:03:37.514 20:02:24 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:03:37.514 20:02:24 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:03:37.514 20:02:24 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:03:40.796 nvme0n1 00:03:40.796 20:02:27 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:40.796 [2024-05-16 20:02:27.761547] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:40.796 [2024-05-16 20:02:27.761588] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:40.796 request: 00:03:40.796 { 00:03:40.796 "nvme_ctrlr_name": "nvme0", 00:03:40.796 "password": "test", 00:03:40.796 "method": "bdev_nvme_opal_revert", 00:03:40.796 "req_id": 1 00:03:40.796 } 00:03:40.796 Got JSON-RPC error response 00:03:40.796 response: 00:03:40.796 { 00:03:40.796 "code": -32603, 00:03:40.796 "message": "Internal error" 00:03:40.796 } 00:03:40.796 20:02:27 -- common/autotest_common.sh@1600 -- # true 00:03:40.796 20:02:27 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:03:40.796 20:02:27 -- common/autotest_common.sh@1604 -- # killprocess 88589 00:03:40.796 20:02:27 -- common/autotest_common.sh@946 -- # '[' -z 88589 ']' 00:03:40.796 20:02:27 -- common/autotest_common.sh@950 -- # kill -0 88589 00:03:40.796 20:02:27 -- common/autotest_common.sh@951 -- # uname 00:03:40.796 20:02:27 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:40.796 20:02:27 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88589 00:03:40.796 20:02:27 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:40.796 20:02:27 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:40.796 20:02:27 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88589' 00:03:40.796 killing process with pid 88589 00:03:40.796 20:02:27 -- common/autotest_common.sh@965 -- # kill 88589 00:03:40.796 20:02:27 -- common/autotest_common.sh@970 -- # wait 88589 00:03:42.696 20:02:29 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:42.696 20:02:29 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:42.696 20:02:29 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:42.696 20:02:29 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:42.696 20:02:29 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:42.696 20:02:29 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:42.696 20:02:29 -- common/autotest_common.sh@10 -- # set +x 00:03:42.696 20:02:29 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:42.696 20:02:29 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:42.696 20:02:29 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:42.696 20:02:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.696 20:02:29 -- common/autotest_common.sh@10 -- # set +x 00:03:42.696 ************************************ 00:03:42.696 START TEST env 00:03:42.696 ************************************ 00:03:42.696 20:02:29 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:42.696 * Looking for test storage... 00:03:42.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:42.696 20:02:29 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:42.696 20:02:29 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:42.696 20:02:29 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.696 20:02:29 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.696 ************************************ 00:03:42.696 START TEST env_memory 00:03:42.696 ************************************ 00:03:42.696 20:02:29 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:42.696 00:03:42.696 00:03:42.696 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.696 http://cunit.sourceforge.net/ 00:03:42.696 00:03:42.696 00:03:42.696 Suite: memory 00:03:42.696 Test: alloc and free memory map ...[2024-05-16 20:02:29.666649] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:42.696 passed 00:03:42.696 Test: mem map translation ...[2024-05-16 20:02:29.687535] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:42.696 [2024-05-16 20:02:29.687557] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:42.696 [2024-05-16 20:02:29.687614] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:42.696 [2024-05-16 20:02:29.687631] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:42.696 passed 00:03:42.696 Test: mem map registration ...[2024-05-16 20:02:29.729223] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:42.696 [2024-05-16 20:02:29.729244] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:42.696 passed 00:03:42.696 Test: mem map adjacent registrations ...passed 00:03:42.696 00:03:42.696 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.696 suites 1 1 n/a 0 0 00:03:42.696 tests 4 4 4 0 0 00:03:42.696 asserts 152 152 152 0 n/a 00:03:42.696 00:03:42.696 Elapsed time = 0.143 seconds 00:03:42.696 00:03:42.696 real 0m0.151s 00:03:42.696 user 0m0.144s 00:03:42.696 sys 0m0.007s 00:03:42.696 20:02:29 
env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:42.696 20:02:29 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:42.696 ************************************ 00:03:42.696 END TEST env_memory 00:03:42.696 ************************************ 00:03:42.696 20:02:29 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:42.696 20:02:29 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:42.696 20:02:29 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.696 20:02:29 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.696 ************************************ 00:03:42.696 START TEST env_vtophys 00:03:42.696 ************************************ 00:03:42.696 20:02:29 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:42.697 EAL: lib.eal log level changed from notice to debug 00:03:42.697 EAL: Detected lcore 0 as core 0 on socket 0 00:03:42.697 EAL: Detected lcore 1 as core 1 on socket 0 00:03:42.697 EAL: Detected lcore 2 as core 2 on socket 0 00:03:42.697 EAL: Detected lcore 3 as core 3 on socket 0 00:03:42.697 EAL: Detected lcore 4 as core 4 on socket 0 00:03:42.697 EAL: Detected lcore 5 as core 5 on socket 0 00:03:42.697 EAL: Detected lcore 6 as core 8 on socket 0 00:03:42.697 EAL: Detected lcore 7 as core 9 on socket 0 00:03:42.697 EAL: Detected lcore 8 as core 10 on socket 0 00:03:42.697 EAL: Detected lcore 9 as core 11 on socket 0 00:03:42.697 EAL: Detected lcore 10 as core 12 on socket 0 00:03:42.697 EAL: Detected lcore 11 as core 13 on socket 0 00:03:42.697 EAL: Detected lcore 12 as core 0 on socket 1 00:03:42.697 EAL: Detected lcore 13 as core 1 on socket 1 00:03:42.697 EAL: Detected lcore 14 as core 2 on socket 1 00:03:42.697 EAL: Detected lcore 15 as core 3 on socket 1 00:03:42.697 EAL: Detected lcore 16 as core 4 on socket 1 00:03:42.697 EAL: Detected lcore 17 as core 5 on socket 1 00:03:42.697 EAL: Detected lcore 18 as core 8 on socket 1 00:03:42.697 EAL: Detected lcore 19 as core 9 on socket 1 00:03:42.697 EAL: Detected lcore 20 as core 10 on socket 1 00:03:42.697 EAL: Detected lcore 21 as core 11 on socket 1 00:03:42.697 EAL: Detected lcore 22 as core 12 on socket 1 00:03:42.697 EAL: Detected lcore 23 as core 13 on socket 1 00:03:42.697 EAL: Detected lcore 24 as core 0 on socket 0 00:03:42.697 EAL: Detected lcore 25 as core 1 on socket 0 00:03:42.697 EAL: Detected lcore 26 as core 2 on socket 0 00:03:42.697 EAL: Detected lcore 27 as core 3 on socket 0 00:03:42.697 EAL: Detected lcore 28 as core 4 on socket 0 00:03:42.697 EAL: Detected lcore 29 as core 5 on socket 0 00:03:42.697 EAL: Detected lcore 30 as core 8 on socket 0 00:03:42.697 EAL: Detected lcore 31 as core 9 on socket 0 00:03:42.697 EAL: Detected lcore 32 as core 10 on socket 0 00:03:42.697 EAL: Detected lcore 33 as core 11 on socket 0 00:03:42.697 EAL: Detected lcore 34 as core 12 on socket 0 00:03:42.697 EAL: Detected lcore 35 as core 13 on socket 0 00:03:42.697 EAL: Detected lcore 36 as core 0 on socket 1 00:03:42.697 EAL: Detected lcore 37 as core 1 on socket 1 00:03:42.697 EAL: Detected lcore 38 as core 2 on socket 1 00:03:42.697 EAL: Detected lcore 39 as core 3 on socket 1 00:03:42.697 EAL: Detected lcore 40 as core 4 on socket 1 00:03:42.697 EAL: Detected lcore 41 as core 5 on socket 1 00:03:42.697 EAL: Detected lcore 42 as core 8 on socket 1 00:03:42.697 EAL: Detected lcore 43 as core 9 
on socket 1 00:03:42.697 EAL: Detected lcore 44 as core 10 on socket 1 00:03:42.697 EAL: Detected lcore 45 as core 11 on socket 1 00:03:42.697 EAL: Detected lcore 46 as core 12 on socket 1 00:03:42.697 EAL: Detected lcore 47 as core 13 on socket 1 00:03:42.956 EAL: Maximum logical cores by configuration: 128 00:03:42.956 EAL: Detected CPU lcores: 48 00:03:42.956 EAL: Detected NUMA nodes: 2 00:03:42.956 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:42.956 EAL: Detected shared linkage of DPDK 00:03:42.956 EAL: No shared files mode enabled, IPC will be disabled 00:03:42.956 EAL: Bus pci wants IOVA as 'DC' 00:03:42.956 EAL: Buses did not request a specific IOVA mode. 00:03:42.956 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:42.956 EAL: Selected IOVA mode 'VA' 00:03:42.956 EAL: No free 2048 kB hugepages reported on node 1 00:03:42.956 EAL: Probing VFIO support... 00:03:42.956 EAL: IOMMU type 1 (Type 1) is supported 00:03:42.956 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:42.956 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:42.956 EAL: VFIO support initialized 00:03:42.956 EAL: Ask a virtual area of 0x2e000 bytes 00:03:42.956 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:42.956 EAL: Setting up physically contiguous memory... 00:03:42.956 EAL: Setting maximum number of open files to 524288 00:03:42.956 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:42.956 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:42.956 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:42.956 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.956 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:42.956 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.956 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.956 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:42.956 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:42.956 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.956 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:42.956 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.956 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.956 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:42.956 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:42.956 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.956 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:42.956 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.956 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.956 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:42.956 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:42.956 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.956 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:42.956 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.956 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.956 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:42.956 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:42.956 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:42.956 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.956 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:42.956 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:42.956 EAL: Ask a virtual 
area of 0x400000000 bytes 00:03:42.956 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:42.956 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:42.956 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.956 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:42.956 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:42.956 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.956 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:42.956 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:42.956 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.956 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:42.956 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:42.956 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.956 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:42.956 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:42.956 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.956 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:42.956 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:42.956 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.956 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:42.956 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:42.956 EAL: Hugepages will be freed exactly as allocated. 00:03:42.956 EAL: No shared files mode enabled, IPC is disabled 00:03:42.956 EAL: No shared files mode enabled, IPC is disabled 00:03:42.956 EAL: TSC frequency is ~2700000 KHz 00:03:42.956 EAL: Main lcore 0 is ready (tid=7f25f31d9a00;cpuset=[0]) 00:03:42.956 EAL: Trying to obtain current memory policy. 00:03:42.956 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.956 EAL: Restoring previous memory policy: 0 00:03:42.956 EAL: request: mp_malloc_sync 00:03:42.956 EAL: No shared files mode enabled, IPC is disabled 00:03:42.956 EAL: Heap on socket 0 was expanded by 2MB 00:03:42.956 EAL: No shared files mode enabled, IPC is disabled 00:03:42.956 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:42.956 EAL: Mem event callback 'spdk:(nil)' registered 00:03:42.956 00:03:42.956 00:03:42.956 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.956 http://cunit.sourceforge.net/ 00:03:42.956 00:03:42.956 00:03:42.956 Suite: components_suite 00:03:42.956 Test: vtophys_malloc_test ...passed 00:03:42.956 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:42.956 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.956 EAL: Restoring previous memory policy: 4 00:03:42.956 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.956 EAL: request: mp_malloc_sync 00:03:42.956 EAL: No shared files mode enabled, IPC is disabled 00:03:42.956 EAL: Heap on socket 0 was expanded by 4MB 00:03:42.956 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.956 EAL: request: mp_malloc_sync 00:03:42.956 EAL: No shared files mode enabled, IPC is disabled 00:03:42.956 EAL: Heap on socket 0 was shrunk by 4MB 00:03:42.956 EAL: Trying to obtain current memory policy. 
00:03:42.956 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.956 EAL: Restoring previous memory policy: 4 00:03:42.956 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.956 EAL: request: mp_malloc_sync 00:03:42.956 EAL: No shared files mode enabled, IPC is disabled 00:03:42.956 EAL: Heap on socket 0 was expanded by 6MB 00:03:42.956 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.956 EAL: request: mp_malloc_sync 00:03:42.956 EAL: No shared files mode enabled, IPC is disabled 00:03:42.956 EAL: Heap on socket 0 was shrunk by 6MB 00:03:42.956 EAL: Trying to obtain current memory policy. 00:03:42.956 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.956 EAL: Restoring previous memory policy: 4 00:03:42.956 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.956 EAL: request: mp_malloc_sync 00:03:42.956 EAL: No shared files mode enabled, IPC is disabled 00:03:42.957 EAL: Heap on socket 0 was expanded by 10MB 00:03:42.957 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.957 EAL: request: mp_malloc_sync 00:03:42.957 EAL: No shared files mode enabled, IPC is disabled 00:03:42.957 EAL: Heap on socket 0 was shrunk by 10MB 00:03:42.957 EAL: Trying to obtain current memory policy. 00:03:42.957 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.957 EAL: Restoring previous memory policy: 4 00:03:42.957 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.957 EAL: request: mp_malloc_sync 00:03:42.957 EAL: No shared files mode enabled, IPC is disabled 00:03:42.957 EAL: Heap on socket 0 was expanded by 18MB 00:03:42.957 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.957 EAL: request: mp_malloc_sync 00:03:42.957 EAL: No shared files mode enabled, IPC is disabled 00:03:42.957 EAL: Heap on socket 0 was shrunk by 18MB 00:03:42.957 EAL: Trying to obtain current memory policy. 00:03:42.957 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.957 EAL: Restoring previous memory policy: 4 00:03:42.957 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.957 EAL: request: mp_malloc_sync 00:03:42.957 EAL: No shared files mode enabled, IPC is disabled 00:03:42.957 EAL: Heap on socket 0 was expanded by 34MB 00:03:42.957 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.957 EAL: request: mp_malloc_sync 00:03:42.957 EAL: No shared files mode enabled, IPC is disabled 00:03:42.957 EAL: Heap on socket 0 was shrunk by 34MB 00:03:42.957 EAL: Trying to obtain current memory policy. 00:03:42.957 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.957 EAL: Restoring previous memory policy: 4 00:03:42.957 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.957 EAL: request: mp_malloc_sync 00:03:42.957 EAL: No shared files mode enabled, IPC is disabled 00:03:42.957 EAL: Heap on socket 0 was expanded by 66MB 00:03:42.957 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.957 EAL: request: mp_malloc_sync 00:03:42.957 EAL: No shared files mode enabled, IPC is disabled 00:03:42.957 EAL: Heap on socket 0 was shrunk by 66MB 00:03:42.957 EAL: Trying to obtain current memory policy. 
00:03:42.957 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.957 EAL: Restoring previous memory policy: 4 00:03:42.957 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.957 EAL: request: mp_malloc_sync 00:03:42.957 EAL: No shared files mode enabled, IPC is disabled 00:03:42.957 EAL: Heap on socket 0 was expanded by 130MB 00:03:42.957 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.957 EAL: request: mp_malloc_sync 00:03:42.957 EAL: No shared files mode enabled, IPC is disabled 00:03:42.957 EAL: Heap on socket 0 was shrunk by 130MB 00:03:42.957 EAL: Trying to obtain current memory policy. 00:03:42.957 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.957 EAL: Restoring previous memory policy: 4 00:03:42.957 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.957 EAL: request: mp_malloc_sync 00:03:42.957 EAL: No shared files mode enabled, IPC is disabled 00:03:42.957 EAL: Heap on socket 0 was expanded by 258MB 00:03:43.214 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.214 EAL: request: mp_malloc_sync 00:03:43.214 EAL: No shared files mode enabled, IPC is disabled 00:03:43.214 EAL: Heap on socket 0 was shrunk by 258MB 00:03:43.214 EAL: Trying to obtain current memory policy. 00:03:43.214 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.214 EAL: Restoring previous memory policy: 4 00:03:43.214 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.214 EAL: request: mp_malloc_sync 00:03:43.214 EAL: No shared files mode enabled, IPC is disabled 00:03:43.214 EAL: Heap on socket 0 was expanded by 514MB 00:03:43.472 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.472 EAL: request: mp_malloc_sync 00:03:43.472 EAL: No shared files mode enabled, IPC is disabled 00:03:43.472 EAL: Heap on socket 0 was shrunk by 514MB 00:03:43.472 EAL: Trying to obtain current memory policy. 
00:03:43.472 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.730 EAL: Restoring previous memory policy: 4 00:03:43.730 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.730 EAL: request: mp_malloc_sync 00:03:43.730 EAL: No shared files mode enabled, IPC is disabled 00:03:43.730 EAL: Heap on socket 0 was expanded by 1026MB 00:03:43.988 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.246 EAL: request: mp_malloc_sync 00:03:44.246 EAL: No shared files mode enabled, IPC is disabled 00:03:44.246 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:44.246 passed 00:03:44.246 00:03:44.246 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.246 suites 1 1 n/a 0 0 00:03:44.246 tests 2 2 2 0 0 00:03:44.246 asserts 497 497 497 0 n/a 00:03:44.246 00:03:44.246 Elapsed time = 1.243 seconds 00:03:44.246 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.246 EAL: request: mp_malloc_sync 00:03:44.246 EAL: No shared files mode enabled, IPC is disabled 00:03:44.246 EAL: Heap on socket 0 was shrunk by 2MB 00:03:44.246 EAL: No shared files mode enabled, IPC is disabled 00:03:44.246 EAL: No shared files mode enabled, IPC is disabled 00:03:44.246 EAL: No shared files mode enabled, IPC is disabled 00:03:44.246 00:03:44.246 real 0m1.350s 00:03:44.246 user 0m0.767s 00:03:44.246 sys 0m0.550s 00:03:44.246 20:02:31 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:44.246 20:02:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:44.246 ************************************ 00:03:44.246 END TEST env_vtophys 00:03:44.246 ************************************ 00:03:44.246 20:02:31 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:44.246 20:02:31 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:44.246 20:02:31 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:44.246 20:02:31 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.246 ************************************ 00:03:44.246 START TEST env_pci 00:03:44.246 ************************************ 00:03:44.246 20:02:31 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:44.246 00:03:44.246 00:03:44.246 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.246 http://cunit.sourceforge.net/ 00:03:44.246 00:03:44.246 00:03:44.246 Suite: pci 00:03:44.246 Test: pci_hook ...[2024-05-16 20:02:31.237139] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 89477 has claimed it 00:03:44.246 EAL: Cannot find device (10000:00:01.0) 00:03:44.246 EAL: Failed to attach device on primary process 00:03:44.246 passed 00:03:44.246 00:03:44.246 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.246 suites 1 1 n/a 0 0 00:03:44.246 tests 1 1 1 0 0 00:03:44.246 asserts 25 25 25 0 n/a 00:03:44.246 00:03:44.246 Elapsed time = 0.019 seconds 00:03:44.246 00:03:44.246 real 0m0.029s 00:03:44.246 user 0m0.015s 00:03:44.246 sys 0m0.014s 00:03:44.246 20:02:31 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:44.246 20:02:31 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:44.246 ************************************ 00:03:44.246 END TEST env_pci 00:03:44.246 ************************************ 00:03:44.246 20:02:31 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:44.246 20:02:31 
env -- env/env.sh@15 -- # uname 00:03:44.246 20:02:31 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:44.246 20:02:31 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:44.247 20:02:31 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:44.247 20:02:31 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:03:44.247 20:02:31 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:44.247 20:02:31 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.247 ************************************ 00:03:44.247 START TEST env_dpdk_post_init 00:03:44.247 ************************************ 00:03:44.247 20:02:31 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:44.247 EAL: Detected CPU lcores: 48 00:03:44.247 EAL: Detected NUMA nodes: 2 00:03:44.247 EAL: Detected shared linkage of DPDK 00:03:44.247 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:44.247 EAL: Selected IOVA mode 'VA' 00:03:44.247 EAL: No free 2048 kB hugepages reported on node 1 00:03:44.247 EAL: VFIO support initialized 00:03:44.247 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:44.505 EAL: Using IOMMU type 1 (Type 1) 00:03:44.505 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:44.505 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:44.505 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:44.505 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:44.505 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:44.505 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:44.505 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:44.505 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:45.444 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:03:45.444 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:45.444 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:45.444 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:45.444 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:45.444 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:45.444 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:45.444 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:45.444 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:48.725 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:03:48.725 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:03:48.725 Starting DPDK initialization... 00:03:48.725 Starting SPDK post initialization... 00:03:48.725 SPDK NVMe probe 00:03:48.725 Attaching to 0000:0b:00.0 00:03:48.725 Attached to 0000:0b:00.0 00:03:48.725 Cleaning up... 
00:03:48.725 00:03:48.725 real 0m4.306s 00:03:48.725 user 0m3.167s 00:03:48.725 sys 0m0.201s 00:03:48.725 20:02:35 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:48.725 20:02:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:48.725 ************************************ 00:03:48.725 END TEST env_dpdk_post_init 00:03:48.725 ************************************ 00:03:48.725 20:02:35 env -- env/env.sh@26 -- # uname 00:03:48.725 20:02:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:48.725 20:02:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:48.725 20:02:35 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:48.725 20:02:35 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:48.725 20:02:35 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.725 ************************************ 00:03:48.725 START TEST env_mem_callbacks 00:03:48.725 ************************************ 00:03:48.725 20:02:35 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:48.725 EAL: Detected CPU lcores: 48 00:03:48.725 EAL: Detected NUMA nodes: 2 00:03:48.725 EAL: Detected shared linkage of DPDK 00:03:48.725 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:48.725 EAL: Selected IOVA mode 'VA' 00:03:48.725 EAL: No free 2048 kB hugepages reported on node 1 00:03:48.725 EAL: VFIO support initialized 00:03:48.725 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:48.725 00:03:48.725 00:03:48.725 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.725 http://cunit.sourceforge.net/ 00:03:48.725 00:03:48.725 00:03:48.725 Suite: memory 00:03:48.725 Test: test ... 
00:03:48.725 register 0x200000200000 2097152 00:03:48.725 malloc 3145728 00:03:48.725 register 0x200000400000 4194304 00:03:48.725 buf 0x200000500000 len 3145728 PASSED 00:03:48.725 malloc 64 00:03:48.725 buf 0x2000004fff40 len 64 PASSED 00:03:48.725 malloc 4194304 00:03:48.725 register 0x200000800000 6291456 00:03:48.725 buf 0x200000a00000 len 4194304 PASSED 00:03:48.725 free 0x200000500000 3145728 00:03:48.725 free 0x2000004fff40 64 00:03:48.725 unregister 0x200000400000 4194304 PASSED 00:03:48.725 free 0x200000a00000 4194304 00:03:48.725 unregister 0x200000800000 6291456 PASSED 00:03:48.725 malloc 8388608 00:03:48.725 register 0x200000400000 10485760 00:03:48.725 buf 0x200000600000 len 8388608 PASSED 00:03:48.725 free 0x200000600000 8388608 00:03:48.725 unregister 0x200000400000 10485760 PASSED 00:03:48.725 passed 00:03:48.725 00:03:48.725 Run Summary: Type Total Ran Passed Failed Inactive 00:03:48.725 suites 1 1 n/a 0 0 00:03:48.725 tests 1 1 1 0 0 00:03:48.725 asserts 15 15 15 0 n/a 00:03:48.725 00:03:48.725 Elapsed time = 0.005 seconds 00:03:48.725 00:03:48.725 real 0m0.049s 00:03:48.725 user 0m0.011s 00:03:48.725 sys 0m0.038s 00:03:48.725 20:02:35 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:48.725 20:02:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:48.725 ************************************ 00:03:48.725 END TEST env_mem_callbacks 00:03:48.725 ************************************ 00:03:48.725 00:03:48.725 real 0m6.182s 00:03:48.725 user 0m4.225s 00:03:48.725 sys 0m0.993s 00:03:48.725 20:02:35 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:48.725 20:02:35 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.725 ************************************ 00:03:48.725 END TEST env 00:03:48.725 ************************************ 00:03:48.725 20:02:35 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:48.725 20:02:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:48.725 20:02:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:48.725 20:02:35 -- common/autotest_common.sh@10 -- # set +x 00:03:48.725 ************************************ 00:03:48.725 START TEST rpc 00:03:48.725 ************************************ 00:03:48.725 20:02:35 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:48.725 * Looking for test storage... 00:03:48.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:48.725 20:02:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=90132 00:03:48.725 20:02:35 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:48.725 20:02:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:48.725 20:02:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 90132 00:03:48.725 20:02:35 rpc -- common/autotest_common.sh@827 -- # '[' -z 90132 ']' 00:03:48.725 20:02:35 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:48.725 20:02:35 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:48.725 20:02:35 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:48.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
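The rpc suite that begins here drives spdk_tgt entirely over JSON-RPC. A minimal stand-alone sketch of the same cycle follows (start the target, wait for its UNIX socket, create a malloc bdev, wrap it in a passthru bdev, list it, delete it); the paths, the -e bdev flag, and the Malloc0/Passthru0 names are taken from the trace itself, while the socket-polling loop is only an assumption standing in for the harness's waitforlisten helper.

  #!/usr/bin/env bash
  # Illustrative sketch of the spdk_tgt + rpc.py cycle exercised by rpc_integrity; not captured log output.
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  "$rootdir/build/bin/spdk_tgt" -e bdev &        # same invocation as rpc.sh uses in this trace
  tgt_pid=$!

  # crude stand-in for waitforlisten: block until the default RPC socket appears
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done

  "$rootdir/scripts/rpc.py" bdev_malloc_create 8 512                      # returns Malloc0
  "$rootdir/scripts/rpc.py" bdev_passthru_create -b Malloc0 -p Passthru0
  "$rootdir/scripts/rpc.py" bdev_get_bdevs | jq length                    # expect 2 bdevs, as the test asserts
  "$rootdir/scripts/rpc.py" bdev_passthru_delete Passthru0

  kill "$tgt_pid"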
00:03:48.725 20:02:35 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:48.725 20:02:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.983 [2024-05-16 20:02:35.882177] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:03:48.983 [2024-05-16 20:02:35.882254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90132 ] 00:03:48.983 EAL: No free 2048 kB hugepages reported on node 1 00:03:48.983 [2024-05-16 20:02:35.938213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.983 [2024-05-16 20:02:36.046467] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:48.983 [2024-05-16 20:02:36.046522] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 90132' to capture a snapshot of events at runtime. 00:03:48.983 [2024-05-16 20:02:36.046551] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:48.983 [2024-05-16 20:02:36.046563] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:48.983 [2024-05-16 20:02:36.046573] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid90132 for offline analysis/debug. 00:03:48.983 [2024-05-16 20:02:36.046601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.242 20:02:36 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:49.242 20:02:36 rpc -- common/autotest_common.sh@860 -- # return 0 00:03:49.242 20:02:36 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:49.242 20:02:36 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:49.242 20:02:36 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:49.242 20:02:36 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:49.242 20:02:36 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:49.242 20:02:36 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:49.242 20:02:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.242 ************************************ 00:03:49.242 START TEST rpc_integrity 00:03:49.242 ************************************ 00:03:49.242 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:03:49.242 20:02:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:49.242 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.242 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.242 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.242 20:02:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:49.242 20:02:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:49.242 20:02:36 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:49.242 20:02:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:49.242 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.242 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.242 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.242 20:02:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:49.242 20:02:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:49.242 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.242 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.242 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.242 20:02:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:49.242 { 00:03:49.242 "name": "Malloc0", 00:03:49.242 "aliases": [ 00:03:49.242 "73f67f24-7b37-44b9-bf8e-9a82ca138aa8" 00:03:49.242 ], 00:03:49.242 "product_name": "Malloc disk", 00:03:49.242 "block_size": 512, 00:03:49.242 "num_blocks": 16384, 00:03:49.242 "uuid": "73f67f24-7b37-44b9-bf8e-9a82ca138aa8", 00:03:49.242 "assigned_rate_limits": { 00:03:49.242 "rw_ios_per_sec": 0, 00:03:49.242 "rw_mbytes_per_sec": 0, 00:03:49.242 "r_mbytes_per_sec": 0, 00:03:49.242 "w_mbytes_per_sec": 0 00:03:49.242 }, 00:03:49.242 "claimed": false, 00:03:49.242 "zoned": false, 00:03:49.242 "supported_io_types": { 00:03:49.242 "read": true, 00:03:49.242 "write": true, 00:03:49.242 "unmap": true, 00:03:49.242 "write_zeroes": true, 00:03:49.242 "flush": true, 00:03:49.242 "reset": true, 00:03:49.242 "compare": false, 00:03:49.242 "compare_and_write": false, 00:03:49.242 "abort": true, 00:03:49.242 "nvme_admin": false, 00:03:49.242 "nvme_io": false 00:03:49.242 }, 00:03:49.242 "memory_domains": [ 00:03:49.242 { 00:03:49.242 "dma_device_id": "system", 00:03:49.242 "dma_device_type": 1 00:03:49.242 }, 00:03:49.242 { 00:03:49.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.242 "dma_device_type": 2 00:03:49.242 } 00:03:49.242 ], 00:03:49.242 "driver_specific": {} 00:03:49.242 } 00:03:49.242 ]' 00:03:49.242 20:02:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:49.501 20:02:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:49.501 20:02:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:49.501 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.501 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.501 [2024-05-16 20:02:36.408129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:49.501 [2024-05-16 20:02:36.408184] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:49.501 [2024-05-16 20:02:36.408204] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d30b00 00:03:49.501 [2024-05-16 20:02:36.408217] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:49.501 [2024-05-16 20:02:36.409681] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:49.501 [2024-05-16 20:02:36.409703] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:49.501 Passthru0 00:03:49.501 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.501 20:02:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:49.501 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.501 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.501 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.501 20:02:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:49.501 { 00:03:49.501 "name": "Malloc0", 00:03:49.501 "aliases": [ 00:03:49.501 "73f67f24-7b37-44b9-bf8e-9a82ca138aa8" 00:03:49.501 ], 00:03:49.501 "product_name": "Malloc disk", 00:03:49.501 "block_size": 512, 00:03:49.501 "num_blocks": 16384, 00:03:49.501 "uuid": "73f67f24-7b37-44b9-bf8e-9a82ca138aa8", 00:03:49.501 "assigned_rate_limits": { 00:03:49.501 "rw_ios_per_sec": 0, 00:03:49.501 "rw_mbytes_per_sec": 0, 00:03:49.501 "r_mbytes_per_sec": 0, 00:03:49.501 "w_mbytes_per_sec": 0 00:03:49.501 }, 00:03:49.501 "claimed": true, 00:03:49.501 "claim_type": "exclusive_write", 00:03:49.501 "zoned": false, 00:03:49.501 "supported_io_types": { 00:03:49.501 "read": true, 00:03:49.501 "write": true, 00:03:49.501 "unmap": true, 00:03:49.501 "write_zeroes": true, 00:03:49.501 "flush": true, 00:03:49.501 "reset": true, 00:03:49.501 "compare": false, 00:03:49.501 "compare_and_write": false, 00:03:49.501 "abort": true, 00:03:49.501 "nvme_admin": false, 00:03:49.501 "nvme_io": false 00:03:49.501 }, 00:03:49.501 "memory_domains": [ 00:03:49.501 { 00:03:49.501 "dma_device_id": "system", 00:03:49.501 "dma_device_type": 1 00:03:49.501 }, 00:03:49.501 { 00:03:49.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.501 "dma_device_type": 2 00:03:49.501 } 00:03:49.501 ], 00:03:49.501 "driver_specific": {} 00:03:49.501 }, 00:03:49.501 { 00:03:49.501 "name": "Passthru0", 00:03:49.501 "aliases": [ 00:03:49.501 "07709e5b-62f5-51b9-8b0a-7e230b917974" 00:03:49.501 ], 00:03:49.501 "product_name": "passthru", 00:03:49.501 "block_size": 512, 00:03:49.501 "num_blocks": 16384, 00:03:49.501 "uuid": "07709e5b-62f5-51b9-8b0a-7e230b917974", 00:03:49.501 "assigned_rate_limits": { 00:03:49.501 "rw_ios_per_sec": 0, 00:03:49.501 "rw_mbytes_per_sec": 0, 00:03:49.501 "r_mbytes_per_sec": 0, 00:03:49.501 "w_mbytes_per_sec": 0 00:03:49.501 }, 00:03:49.501 "claimed": false, 00:03:49.501 "zoned": false, 00:03:49.501 "supported_io_types": { 00:03:49.501 "read": true, 00:03:49.501 "write": true, 00:03:49.501 "unmap": true, 00:03:49.501 "write_zeroes": true, 00:03:49.501 "flush": true, 00:03:49.501 "reset": true, 00:03:49.501 "compare": false, 00:03:49.501 "compare_and_write": false, 00:03:49.501 "abort": true, 00:03:49.501 "nvme_admin": false, 00:03:49.501 "nvme_io": false 00:03:49.501 }, 00:03:49.501 "memory_domains": [ 00:03:49.501 { 00:03:49.501 "dma_device_id": "system", 00:03:49.501 "dma_device_type": 1 00:03:49.501 }, 00:03:49.501 { 00:03:49.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.501 "dma_device_type": 2 00:03:49.501 } 00:03:49.501 ], 00:03:49.501 "driver_specific": { 00:03:49.501 "passthru": { 00:03:49.501 "name": "Passthru0", 00:03:49.501 "base_bdev_name": "Malloc0" 00:03:49.501 } 00:03:49.501 } 00:03:49.501 } 00:03:49.501 ]' 00:03:49.501 20:02:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:49.501 20:02:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:49.501 20:02:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:49.501 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.501 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.501 
20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.501 20:02:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:49.501 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.501 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.501 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.501 20:02:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:49.501 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.501 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.501 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.501 20:02:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:49.501 20:02:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:49.501 20:02:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:49.501 00:03:49.501 real 0m0.212s 00:03:49.501 user 0m0.133s 00:03:49.501 sys 0m0.021s 00:03:49.501 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:49.501 20:02:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.501 ************************************ 00:03:49.501 END TEST rpc_integrity 00:03:49.501 ************************************ 00:03:49.501 20:02:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:49.501 20:02:36 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:49.501 20:02:36 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:49.501 20:02:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.501 ************************************ 00:03:49.501 START TEST rpc_plugins 00:03:49.501 ************************************ 00:03:49.501 20:02:36 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:03:49.501 20:02:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:49.501 20:02:36 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.501 20:02:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.501 20:02:36 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.501 20:02:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:49.501 20:02:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:49.502 20:02:36 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.502 20:02:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.502 20:02:36 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.502 20:02:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:49.502 { 00:03:49.502 "name": "Malloc1", 00:03:49.502 "aliases": [ 00:03:49.502 "8b9ea3fc-3e28-4339-8964-418ab0e0545a" 00:03:49.502 ], 00:03:49.502 "product_name": "Malloc disk", 00:03:49.502 "block_size": 4096, 00:03:49.502 "num_blocks": 256, 00:03:49.502 "uuid": "8b9ea3fc-3e28-4339-8964-418ab0e0545a", 00:03:49.502 "assigned_rate_limits": { 00:03:49.502 "rw_ios_per_sec": 0, 00:03:49.502 "rw_mbytes_per_sec": 0, 00:03:49.502 "r_mbytes_per_sec": 0, 00:03:49.502 "w_mbytes_per_sec": 0 00:03:49.502 }, 00:03:49.502 "claimed": false, 00:03:49.502 "zoned": false, 00:03:49.502 "supported_io_types": { 00:03:49.502 "read": true, 00:03:49.502 "write": true, 00:03:49.502 "unmap": true, 00:03:49.502 "write_zeroes": true, 00:03:49.502 
"flush": true, 00:03:49.502 "reset": true, 00:03:49.502 "compare": false, 00:03:49.502 "compare_and_write": false, 00:03:49.502 "abort": true, 00:03:49.502 "nvme_admin": false, 00:03:49.502 "nvme_io": false 00:03:49.502 }, 00:03:49.502 "memory_domains": [ 00:03:49.502 { 00:03:49.502 "dma_device_id": "system", 00:03:49.502 "dma_device_type": 1 00:03:49.502 }, 00:03:49.502 { 00:03:49.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.502 "dma_device_type": 2 00:03:49.502 } 00:03:49.502 ], 00:03:49.502 "driver_specific": {} 00:03:49.502 } 00:03:49.502 ]' 00:03:49.502 20:02:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:49.502 20:02:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:49.502 20:02:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:49.502 20:02:36 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.502 20:02:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.502 20:02:36 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.502 20:02:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:49.502 20:02:36 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.502 20:02:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.502 20:02:36 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.502 20:02:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:49.502 20:02:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:49.760 20:02:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:49.760 00:03:49.760 real 0m0.105s 00:03:49.760 user 0m0.065s 00:03:49.760 sys 0m0.010s 00:03:49.760 20:02:36 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:49.760 20:02:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.760 ************************************ 00:03:49.760 END TEST rpc_plugins 00:03:49.760 ************************************ 00:03:49.760 20:02:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:49.760 20:02:36 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:49.760 20:02:36 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:49.760 20:02:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.760 ************************************ 00:03:49.760 START TEST rpc_trace_cmd_test 00:03:49.760 ************************************ 00:03:49.760 20:02:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:03:49.760 20:02:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:49.760 20:02:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:49.760 20:02:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.760 20:02:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:49.760 20:02:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.760 20:02:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:49.760 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid90132", 00:03:49.760 "tpoint_group_mask": "0x8", 00:03:49.760 "iscsi_conn": { 00:03:49.760 "mask": "0x2", 00:03:49.760 "tpoint_mask": "0x0" 00:03:49.760 }, 00:03:49.760 "scsi": { 00:03:49.760 "mask": "0x4", 00:03:49.760 "tpoint_mask": "0x0" 00:03:49.760 }, 00:03:49.760 "bdev": { 00:03:49.760 "mask": "0x8", 00:03:49.760 "tpoint_mask": 
"0xffffffffffffffff" 00:03:49.760 }, 00:03:49.760 "nvmf_rdma": { 00:03:49.760 "mask": "0x10", 00:03:49.760 "tpoint_mask": "0x0" 00:03:49.760 }, 00:03:49.760 "nvmf_tcp": { 00:03:49.760 "mask": "0x20", 00:03:49.760 "tpoint_mask": "0x0" 00:03:49.760 }, 00:03:49.760 "ftl": { 00:03:49.760 "mask": "0x40", 00:03:49.760 "tpoint_mask": "0x0" 00:03:49.760 }, 00:03:49.760 "blobfs": { 00:03:49.760 "mask": "0x80", 00:03:49.760 "tpoint_mask": "0x0" 00:03:49.760 }, 00:03:49.760 "dsa": { 00:03:49.760 "mask": "0x200", 00:03:49.760 "tpoint_mask": "0x0" 00:03:49.760 }, 00:03:49.760 "thread": { 00:03:49.760 "mask": "0x400", 00:03:49.760 "tpoint_mask": "0x0" 00:03:49.760 }, 00:03:49.760 "nvme_pcie": { 00:03:49.760 "mask": "0x800", 00:03:49.760 "tpoint_mask": "0x0" 00:03:49.760 }, 00:03:49.760 "iaa": { 00:03:49.760 "mask": "0x1000", 00:03:49.760 "tpoint_mask": "0x0" 00:03:49.760 }, 00:03:49.760 "nvme_tcp": { 00:03:49.760 "mask": "0x2000", 00:03:49.760 "tpoint_mask": "0x0" 00:03:49.760 }, 00:03:49.760 "bdev_nvme": { 00:03:49.760 "mask": "0x4000", 00:03:49.760 "tpoint_mask": "0x0" 00:03:49.760 }, 00:03:49.760 "sock": { 00:03:49.760 "mask": "0x8000", 00:03:49.760 "tpoint_mask": "0x0" 00:03:49.760 } 00:03:49.760 }' 00:03:49.760 20:02:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:49.760 20:02:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:49.760 20:02:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:49.760 20:02:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:49.760 20:02:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:49.760 20:02:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:49.760 20:02:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:49.760 20:02:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:49.760 20:02:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:49.760 20:02:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:49.760 00:03:49.760 real 0m0.179s 00:03:49.760 user 0m0.159s 00:03:49.760 sys 0m0.014s 00:03:49.760 20:02:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:49.760 20:02:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:49.760 ************************************ 00:03:49.760 END TEST rpc_trace_cmd_test 00:03:49.760 ************************************ 00:03:50.018 20:02:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:50.018 20:02:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:50.018 20:02:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:50.018 20:02:36 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:50.018 20:02:36 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:50.018 20:02:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.018 ************************************ 00:03:50.018 START TEST rpc_daemon_integrity 00:03:50.018 ************************************ 00:03:50.018 20:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:03:50.018 20:02:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:50.018 20:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:50.018 20:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.018 20:02:36 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:50.018 20:02:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:50.018 20:02:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:50.018 20:02:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:50.018 20:02:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:50.018 20:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:50.018 20:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.018 20:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:50.018 20:02:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:50.018 20:02:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:50.018 20:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:50.018 20:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.018 20:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:50.018 20:02:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:50.018 { 00:03:50.018 "name": "Malloc2", 00:03:50.018 "aliases": [ 00:03:50.018 "cb310239-009d-46a9-b439-28e53bed4fba" 00:03:50.018 ], 00:03:50.018 "product_name": "Malloc disk", 00:03:50.018 "block_size": 512, 00:03:50.018 "num_blocks": 16384, 00:03:50.018 "uuid": "cb310239-009d-46a9-b439-28e53bed4fba", 00:03:50.018 "assigned_rate_limits": { 00:03:50.018 "rw_ios_per_sec": 0, 00:03:50.018 "rw_mbytes_per_sec": 0, 00:03:50.018 "r_mbytes_per_sec": 0, 00:03:50.018 "w_mbytes_per_sec": 0 00:03:50.018 }, 00:03:50.018 "claimed": false, 00:03:50.018 "zoned": false, 00:03:50.018 "supported_io_types": { 00:03:50.018 "read": true, 00:03:50.018 "write": true, 00:03:50.018 "unmap": true, 00:03:50.018 "write_zeroes": true, 00:03:50.018 "flush": true, 00:03:50.018 "reset": true, 00:03:50.018 "compare": false, 00:03:50.018 "compare_and_write": false, 00:03:50.018 "abort": true, 00:03:50.018 "nvme_admin": false, 00:03:50.018 "nvme_io": false 00:03:50.018 }, 00:03:50.018 "memory_domains": [ 00:03:50.018 { 00:03:50.018 "dma_device_id": "system", 00:03:50.018 "dma_device_type": 1 00:03:50.018 }, 00:03:50.018 { 00:03:50.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.018 "dma_device_type": 2 00:03:50.018 } 00:03:50.018 ], 00:03:50.018 "driver_specific": {} 00:03:50.018 } 00:03:50.018 ]' 00:03:50.018 20:02:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:50.018 20:02:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:50.018 20:02:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:50.018 20:02:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:50.018 20:02:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.018 [2024-05-16 20:02:37.033949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:50.019 [2024-05-16 20:02:37.034007] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:50.019 [2024-05-16 20:02:37.034035] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d318a0 00:03:50.019 [2024-05-16 20:02:37.034050] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:50.019 [2024-05-16 20:02:37.035273] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:50.019 [2024-05-16 20:02:37.035295] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:50.019 Passthru0 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:50.019 { 00:03:50.019 "name": "Malloc2", 00:03:50.019 "aliases": [ 00:03:50.019 "cb310239-009d-46a9-b439-28e53bed4fba" 00:03:50.019 ], 00:03:50.019 "product_name": "Malloc disk", 00:03:50.019 "block_size": 512, 00:03:50.019 "num_blocks": 16384, 00:03:50.019 "uuid": "cb310239-009d-46a9-b439-28e53bed4fba", 00:03:50.019 "assigned_rate_limits": { 00:03:50.019 "rw_ios_per_sec": 0, 00:03:50.019 "rw_mbytes_per_sec": 0, 00:03:50.019 "r_mbytes_per_sec": 0, 00:03:50.019 "w_mbytes_per_sec": 0 00:03:50.019 }, 00:03:50.019 "claimed": true, 00:03:50.019 "claim_type": "exclusive_write", 00:03:50.019 "zoned": false, 00:03:50.019 "supported_io_types": { 00:03:50.019 "read": true, 00:03:50.019 "write": true, 00:03:50.019 "unmap": true, 00:03:50.019 "write_zeroes": true, 00:03:50.019 "flush": true, 00:03:50.019 "reset": true, 00:03:50.019 "compare": false, 00:03:50.019 "compare_and_write": false, 00:03:50.019 "abort": true, 00:03:50.019 "nvme_admin": false, 00:03:50.019 "nvme_io": false 00:03:50.019 }, 00:03:50.019 "memory_domains": [ 00:03:50.019 { 00:03:50.019 "dma_device_id": "system", 00:03:50.019 "dma_device_type": 1 00:03:50.019 }, 00:03:50.019 { 00:03:50.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.019 "dma_device_type": 2 00:03:50.019 } 00:03:50.019 ], 00:03:50.019 "driver_specific": {} 00:03:50.019 }, 00:03:50.019 { 00:03:50.019 "name": "Passthru0", 00:03:50.019 "aliases": [ 00:03:50.019 "ee2423bc-674d-5766-a7e9-92b40ffc0c23" 00:03:50.019 ], 00:03:50.019 "product_name": "passthru", 00:03:50.019 "block_size": 512, 00:03:50.019 "num_blocks": 16384, 00:03:50.019 "uuid": "ee2423bc-674d-5766-a7e9-92b40ffc0c23", 00:03:50.019 "assigned_rate_limits": { 00:03:50.019 "rw_ios_per_sec": 0, 00:03:50.019 "rw_mbytes_per_sec": 0, 00:03:50.019 "r_mbytes_per_sec": 0, 00:03:50.019 "w_mbytes_per_sec": 0 00:03:50.019 }, 00:03:50.019 "claimed": false, 00:03:50.019 "zoned": false, 00:03:50.019 "supported_io_types": { 00:03:50.019 "read": true, 00:03:50.019 "write": true, 00:03:50.019 "unmap": true, 00:03:50.019 "write_zeroes": true, 00:03:50.019 "flush": true, 00:03:50.019 "reset": true, 00:03:50.019 "compare": false, 00:03:50.019 "compare_and_write": false, 00:03:50.019 "abort": true, 00:03:50.019 "nvme_admin": false, 00:03:50.019 "nvme_io": false 00:03:50.019 }, 00:03:50.019 "memory_domains": [ 00:03:50.019 { 00:03:50.019 "dma_device_id": "system", 00:03:50.019 "dma_device_type": 1 00:03:50.019 }, 00:03:50.019 { 00:03:50.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.019 "dma_device_type": 2 00:03:50.019 } 00:03:50.019 ], 00:03:50.019 "driver_specific": { 00:03:50.019 "passthru": { 00:03:50.019 "name": "Passthru0", 00:03:50.019 "base_bdev_name": "Malloc2" 00:03:50.019 } 00:03:50.019 } 00:03:50.019 } 00:03:50.019 ]' 00:03:50.019 20:02:37 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:50.019 00:03:50.019 real 0m0.208s 00:03:50.019 user 0m0.128s 00:03:50.019 sys 0m0.024s 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:50.019 20:02:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.019 ************************************ 00:03:50.019 END TEST rpc_daemon_integrity 00:03:50.019 ************************************ 00:03:50.278 20:02:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:50.278 20:02:37 rpc -- rpc/rpc.sh@84 -- # killprocess 90132 00:03:50.278 20:02:37 rpc -- common/autotest_common.sh@946 -- # '[' -z 90132 ']' 00:03:50.278 20:02:37 rpc -- common/autotest_common.sh@950 -- # kill -0 90132 00:03:50.278 20:02:37 rpc -- common/autotest_common.sh@951 -- # uname 00:03:50.278 20:02:37 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:50.278 20:02:37 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90132 00:03:50.278 20:02:37 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:50.278 20:02:37 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:50.278 20:02:37 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90132' 00:03:50.278 killing process with pid 90132 00:03:50.278 20:02:37 rpc -- common/autotest_common.sh@965 -- # kill 90132 00:03:50.278 20:02:37 rpc -- common/autotest_common.sh@970 -- # wait 90132 00:03:50.537 00:03:50.537 real 0m1.810s 00:03:50.537 user 0m2.270s 00:03:50.537 sys 0m0.533s 00:03:50.537 20:02:37 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:50.537 20:02:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.537 ************************************ 00:03:50.537 END TEST rpc 00:03:50.537 ************************************ 00:03:50.537 20:02:37 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:50.537 20:02:37 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:50.537 20:02:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:50.537 20:02:37 -- common/autotest_common.sh@10 -- # set +x 00:03:50.537 ************************************ 00:03:50.537 START TEST skip_rpc 00:03:50.537 ************************************ 00:03:50.537 20:02:37 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:50.795 * Looking for test storage... 00:03:50.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:50.795 20:02:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:50.795 20:02:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:50.795 20:02:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:50.795 20:02:37 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:50.795 20:02:37 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:50.795 20:02:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.795 ************************************ 00:03:50.795 START TEST skip_rpc 00:03:50.795 ************************************ 00:03:50.795 20:02:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:03:50.795 20:02:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=90560 00:03:50.795 20:02:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:50.795 20:02:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:50.795 20:02:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:50.795 [2024-05-16 20:02:37.777549] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
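skip_rpc launches its target with --no-rpc-server, so /var/tmp/spdk.sock is never created and the spdk_get_version call issued a few entries below must exit non-zero; the sleep 5 in skip_rpc.sh stands in for waitforlisten because there is no socket to wait on. A hedged sketch of that negative check, with SPDK_DIR assumed as before:

  "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5   # no RPC socket will ever appear, so just give the app time to start

  if "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; then
      echo "unexpected: RPC answered although --no-rpc-server was given" >&2
      kill "$spdk_pid"
      exit 1
  fi
  echo "spdk_get_version failed as expected"
  kill "$spdk_pid"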
00:03:50.795 [2024-05-16 20:02:37.777640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90560 ] 00:03:50.795 EAL: No free 2048 kB hugepages reported on node 1 00:03:50.795 [2024-05-16 20:02:37.834564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.053 [2024-05-16 20:02:37.942804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 90560 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 90560 ']' 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 90560 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90560 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90560' 00:03:56.317 killing process with pid 90560 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 90560 00:03:56.317 20:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 90560 00:03:56.317 00:03:56.317 real 0m5.435s 00:03:56.317 user 0m5.165s 00:03:56.317 sys 0m0.276s 00:03:56.317 20:02:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:56.317 20:02:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.317 ************************************ 00:03:56.317 END TEST skip_rpc 00:03:56.317 
************************************ 00:03:56.317 20:02:43 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:56.317 20:02:43 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:56.317 20:02:43 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:56.317 20:02:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.317 ************************************ 00:03:56.317 START TEST skip_rpc_with_json 00:03:56.317 ************************************ 00:03:56.317 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:03:56.317 20:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:56.317 20:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=91259 00:03:56.317 20:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:56.317 20:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:56.317 20:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 91259 00:03:56.317 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 91259 ']' 00:03:56.317 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.317 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:56.317 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:56.317 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:56.317 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.317 [2024-05-16 20:02:43.267236] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
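skip_rpc_with_json walks its freshly started target (pid 91259) through the sequence logged next: nvmf_get_transports fails while no TCP transport exists yet, nvmf_create_transport -t tcp creates one, and save_config dumps the running configuration as the JSON that follows. A minimal sketch of the same sequence driven through rpc.py, under the same path assumptions as above:

  rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }   # assumption: default socket

  rpc nvmf_get_transports --trtype tcp || echo "no tcp transport yet (expected)"
  rpc nvmf_create_transport -t tcp
  rpc save_config > config.json   # same JSON shape as the dump that follows

  # The test later boots a second target straight from that file:
  "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --json config.json &

The test finally greps that second target's log for 'TCP Transport Init' to confirm the saved transport really was re-created on load.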
00:03:56.317 [2024-05-16 20:02:43.267317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91259 ] 00:03:56.317 EAL: No free 2048 kB hugepages reported on node 1 00:03:56.317 [2024-05-16 20:02:43.322693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.317 [2024-05-16 20:02:43.424022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.585 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:56.585 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:03:56.585 20:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:56.585 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:56.585 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.585 [2024-05-16 20:02:43.648952] nvmf_rpc.c:2548:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:56.585 request: 00:03:56.585 { 00:03:56.585 "trtype": "tcp", 00:03:56.585 "method": "nvmf_get_transports", 00:03:56.585 "req_id": 1 00:03:56.585 } 00:03:56.585 Got JSON-RPC error response 00:03:56.585 response: 00:03:56.585 { 00:03:56.585 "code": -19, 00:03:56.585 "message": "No such device" 00:03:56.585 } 00:03:56.585 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:56.585 20:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:56.585 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:56.585 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.585 [2024-05-16 20:02:43.657061] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:56.585 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:56.585 20:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:56.585 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:56.585 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.847 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:56.847 20:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:56.847 { 00:03:56.847 "subsystems": [ 00:03:56.847 { 00:03:56.847 "subsystem": "vfio_user_target", 00:03:56.847 "config": null 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "subsystem": "keyring", 00:03:56.847 "config": [] 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "subsystem": "iobuf", 00:03:56.847 "config": [ 00:03:56.847 { 00:03:56.847 "method": "iobuf_set_options", 00:03:56.847 "params": { 00:03:56.847 "small_pool_count": 8192, 00:03:56.847 "large_pool_count": 1024, 00:03:56.847 "small_bufsize": 8192, 00:03:56.847 "large_bufsize": 135168 00:03:56.847 } 00:03:56.847 } 00:03:56.847 ] 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "subsystem": "sock", 00:03:56.847 "config": [ 00:03:56.847 { 00:03:56.847 "method": "sock_set_default_impl", 00:03:56.847 "params": { 00:03:56.847 "impl_name": "posix" 00:03:56.847 } 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "method": 
"sock_impl_set_options", 00:03:56.847 "params": { 00:03:56.847 "impl_name": "ssl", 00:03:56.847 "recv_buf_size": 4096, 00:03:56.847 "send_buf_size": 4096, 00:03:56.847 "enable_recv_pipe": true, 00:03:56.847 "enable_quickack": false, 00:03:56.847 "enable_placement_id": 0, 00:03:56.847 "enable_zerocopy_send_server": true, 00:03:56.847 "enable_zerocopy_send_client": false, 00:03:56.847 "zerocopy_threshold": 0, 00:03:56.847 "tls_version": 0, 00:03:56.847 "enable_ktls": false 00:03:56.847 } 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "method": "sock_impl_set_options", 00:03:56.847 "params": { 00:03:56.847 "impl_name": "posix", 00:03:56.847 "recv_buf_size": 2097152, 00:03:56.847 "send_buf_size": 2097152, 00:03:56.847 "enable_recv_pipe": true, 00:03:56.847 "enable_quickack": false, 00:03:56.847 "enable_placement_id": 0, 00:03:56.847 "enable_zerocopy_send_server": true, 00:03:56.847 "enable_zerocopy_send_client": false, 00:03:56.847 "zerocopy_threshold": 0, 00:03:56.847 "tls_version": 0, 00:03:56.847 "enable_ktls": false 00:03:56.847 } 00:03:56.847 } 00:03:56.847 ] 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "subsystem": "vmd", 00:03:56.847 "config": [] 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "subsystem": "accel", 00:03:56.847 "config": [ 00:03:56.847 { 00:03:56.847 "method": "accel_set_options", 00:03:56.847 "params": { 00:03:56.847 "small_cache_size": 128, 00:03:56.847 "large_cache_size": 16, 00:03:56.847 "task_count": 2048, 00:03:56.847 "sequence_count": 2048, 00:03:56.847 "buf_count": 2048 00:03:56.847 } 00:03:56.847 } 00:03:56.847 ] 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "subsystem": "bdev", 00:03:56.847 "config": [ 00:03:56.847 { 00:03:56.847 "method": "bdev_set_options", 00:03:56.847 "params": { 00:03:56.847 "bdev_io_pool_size": 65535, 00:03:56.847 "bdev_io_cache_size": 256, 00:03:56.847 "bdev_auto_examine": true, 00:03:56.847 "iobuf_small_cache_size": 128, 00:03:56.847 "iobuf_large_cache_size": 16 00:03:56.847 } 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "method": "bdev_raid_set_options", 00:03:56.847 "params": { 00:03:56.847 "process_window_size_kb": 1024 00:03:56.847 } 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "method": "bdev_iscsi_set_options", 00:03:56.847 "params": { 00:03:56.847 "timeout_sec": 30 00:03:56.847 } 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "method": "bdev_nvme_set_options", 00:03:56.847 "params": { 00:03:56.847 "action_on_timeout": "none", 00:03:56.847 "timeout_us": 0, 00:03:56.847 "timeout_admin_us": 0, 00:03:56.847 "keep_alive_timeout_ms": 10000, 00:03:56.847 "arbitration_burst": 0, 00:03:56.847 "low_priority_weight": 0, 00:03:56.847 "medium_priority_weight": 0, 00:03:56.847 "high_priority_weight": 0, 00:03:56.847 "nvme_adminq_poll_period_us": 10000, 00:03:56.847 "nvme_ioq_poll_period_us": 0, 00:03:56.847 "io_queue_requests": 0, 00:03:56.847 "delay_cmd_submit": true, 00:03:56.847 "transport_retry_count": 4, 00:03:56.847 "bdev_retry_count": 3, 00:03:56.847 "transport_ack_timeout": 0, 00:03:56.847 "ctrlr_loss_timeout_sec": 0, 00:03:56.847 "reconnect_delay_sec": 0, 00:03:56.847 "fast_io_fail_timeout_sec": 0, 00:03:56.847 "disable_auto_failback": false, 00:03:56.847 "generate_uuids": false, 00:03:56.847 "transport_tos": 0, 00:03:56.847 "nvme_error_stat": false, 00:03:56.847 "rdma_srq_size": 0, 00:03:56.847 "io_path_stat": false, 00:03:56.847 "allow_accel_sequence": false, 00:03:56.847 "rdma_max_cq_size": 0, 00:03:56.847 "rdma_cm_event_timeout_ms": 0, 00:03:56.847 "dhchap_digests": [ 00:03:56.847 "sha256", 00:03:56.847 "sha384", 00:03:56.847 "sha512" 
00:03:56.847 ], 00:03:56.847 "dhchap_dhgroups": [ 00:03:56.847 "null", 00:03:56.847 "ffdhe2048", 00:03:56.847 "ffdhe3072", 00:03:56.847 "ffdhe4096", 00:03:56.847 "ffdhe6144", 00:03:56.847 "ffdhe8192" 00:03:56.847 ] 00:03:56.847 } 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "method": "bdev_nvme_set_hotplug", 00:03:56.847 "params": { 00:03:56.847 "period_us": 100000, 00:03:56.847 "enable": false 00:03:56.847 } 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "method": "bdev_wait_for_examine" 00:03:56.847 } 00:03:56.847 ] 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "subsystem": "scsi", 00:03:56.847 "config": null 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "subsystem": "scheduler", 00:03:56.847 "config": [ 00:03:56.847 { 00:03:56.847 "method": "framework_set_scheduler", 00:03:56.847 "params": { 00:03:56.847 "name": "static" 00:03:56.847 } 00:03:56.847 } 00:03:56.847 ] 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "subsystem": "vhost_scsi", 00:03:56.847 "config": [] 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "subsystem": "vhost_blk", 00:03:56.847 "config": [] 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "subsystem": "ublk", 00:03:56.847 "config": [] 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "subsystem": "nbd", 00:03:56.847 "config": [] 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "subsystem": "nvmf", 00:03:56.847 "config": [ 00:03:56.847 { 00:03:56.847 "method": "nvmf_set_config", 00:03:56.847 "params": { 00:03:56.847 "discovery_filter": "match_any", 00:03:56.847 "admin_cmd_passthru": { 00:03:56.847 "identify_ctrlr": false 00:03:56.847 } 00:03:56.847 } 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "method": "nvmf_set_max_subsystems", 00:03:56.847 "params": { 00:03:56.847 "max_subsystems": 1024 00:03:56.847 } 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "method": "nvmf_set_crdt", 00:03:56.847 "params": { 00:03:56.847 "crdt1": 0, 00:03:56.847 "crdt2": 0, 00:03:56.847 "crdt3": 0 00:03:56.847 } 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "method": "nvmf_create_transport", 00:03:56.847 "params": { 00:03:56.847 "trtype": "TCP", 00:03:56.847 "max_queue_depth": 128, 00:03:56.847 "max_io_qpairs_per_ctrlr": 127, 00:03:56.847 "in_capsule_data_size": 4096, 00:03:56.847 "max_io_size": 131072, 00:03:56.847 "io_unit_size": 131072, 00:03:56.847 "max_aq_depth": 128, 00:03:56.847 "num_shared_buffers": 511, 00:03:56.847 "buf_cache_size": 4294967295, 00:03:56.847 "dif_insert_or_strip": false, 00:03:56.847 "zcopy": false, 00:03:56.847 "c2h_success": true, 00:03:56.847 "sock_priority": 0, 00:03:56.847 "abort_timeout_sec": 1, 00:03:56.847 "ack_timeout": 0, 00:03:56.847 "data_wr_pool_size": 0 00:03:56.847 } 00:03:56.847 } 00:03:56.847 ] 00:03:56.847 }, 00:03:56.847 { 00:03:56.847 "subsystem": "iscsi", 00:03:56.847 "config": [ 00:03:56.847 { 00:03:56.847 "method": "iscsi_set_options", 00:03:56.847 "params": { 00:03:56.847 "node_base": "iqn.2016-06.io.spdk", 00:03:56.847 "max_sessions": 128, 00:03:56.847 "max_connections_per_session": 2, 00:03:56.847 "max_queue_depth": 64, 00:03:56.847 "default_time2wait": 2, 00:03:56.847 "default_time2retain": 20, 00:03:56.847 "first_burst_length": 8192, 00:03:56.847 "immediate_data": true, 00:03:56.847 "allow_duplicated_isid": false, 00:03:56.847 "error_recovery_level": 0, 00:03:56.847 "nop_timeout": 60, 00:03:56.847 "nop_in_interval": 30, 00:03:56.847 "disable_chap": false, 00:03:56.847 "require_chap": false, 00:03:56.847 "mutual_chap": false, 00:03:56.847 "chap_group": 0, 00:03:56.847 "max_large_datain_per_connection": 64, 00:03:56.847 "max_r2t_per_connection": 4, 00:03:56.847 
"pdu_pool_size": 36864, 00:03:56.847 "immediate_data_pool_size": 16384, 00:03:56.847 "data_out_pool_size": 2048 00:03:56.847 } 00:03:56.847 } 00:03:56.847 ] 00:03:56.847 } 00:03:56.847 ] 00:03:56.847 } 00:03:56.847 20:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:56.847 20:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 91259 00:03:56.847 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 91259 ']' 00:03:56.847 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 91259 00:03:56.847 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:03:56.847 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:56.847 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91259 00:03:56.847 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:56.847 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:56.847 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91259' 00:03:56.847 killing process with pid 91259 00:03:56.847 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 91259 00:03:56.847 20:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 91259 00:03:57.106 20:02:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=91394 00:03:57.106 20:02:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:57.106 20:02:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:02.365 20:02:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 91394 00:04:02.365 20:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 91394 ']' 00:04:02.365 20:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 91394 00:04:02.365 20:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:02.365 20:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:02.365 20:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91394 00:04:02.365 20:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:02.365 20:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:02.365 20:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91394' 00:04:02.365 killing process with pid 91394 00:04:02.365 20:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 91394 00:04:02.365 20:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 91394 00:04:02.623 20:02:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:02.623 20:02:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:02.623 00:04:02.623 real 0m6.458s 00:04:02.623 user 
0m6.095s 00:04:02.623 sys 0m0.619s 00:04:02.624 20:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:02.624 20:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:02.624 ************************************ 00:04:02.624 END TEST skip_rpc_with_json 00:04:02.624 ************************************ 00:04:02.624 20:02:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:02.624 20:02:49 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:02.624 20:02:49 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:02.624 20:02:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.624 ************************************ 00:04:02.624 START TEST skip_rpc_with_delay 00:04:02.624 ************************************ 00:04:02.624 20:02:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:04:02.624 20:02:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:02.624 20:02:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:02.624 20:02:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:02.624 20:02:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.624 20:02:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:02.624 20:02:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.624 20:02:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:02.624 20:02:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.624 20:02:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:02.624 20:02:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.624 20:02:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:02.624 20:02:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:02.883 [2024-05-16 20:02:49.779339] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
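That error is the entire point of skip_rpc_with_delay: --wait-for-rpc defers subsystem initialization until an RPC resumes it, so combining it with --no-rpc-server is rejected and the NOT wrapper sees the expected non-zero exit. For contrast, a sketch of the legitimate deferred-init flow; framework_start_init and framework_wait_init are the standard rpc.py methods for this, while the surrounding paths remain assumptions:

  "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 --wait-for-rpc &
  spdk_pid=$!
  # (in practice, wait for the RPC socket as in the earlier sketch before calling rpc.py)

  # Early-configuration RPCs would go here, before any subsystem is initialized.
  "$SPDK_DIR/scripts/rpc.py" framework_start_init   # resume initialization
  "$SPDK_DIR/scripts/rpc.py" framework_wait_init    # block until it has completed

  kill "$spdk_pid"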
00:04:02.883 [2024-05-16 20:02:49.779450] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:02.883 20:02:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:02.883 20:02:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:02.883 20:02:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:02.883 20:02:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:02.883 00:04:02.883 real 0m0.066s 00:04:02.883 user 0m0.041s 00:04:02.883 sys 0m0.025s 00:04:02.883 20:02:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:02.883 20:02:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:02.883 ************************************ 00:04:02.883 END TEST skip_rpc_with_delay 00:04:02.883 ************************************ 00:04:02.883 20:02:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:02.883 20:02:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:02.883 20:02:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:02.883 20:02:49 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:02.883 20:02:49 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:02.883 20:02:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.883 ************************************ 00:04:02.883 START TEST exit_on_failed_rpc_init 00:04:02.883 ************************************ 00:04:02.883 20:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:04:02.883 20:02:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=92111 00:04:02.883 20:02:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:02.883 20:02:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 92111 00:04:02.883 20:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 92111 ']' 00:04:02.883 20:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.883 20:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:02.883 20:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.883 20:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:02.883 20:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:02.883 [2024-05-16 20:02:49.899685] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
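exit_on_failed_rpc_init begins like the other suites: it starts a first target (pid 92111 here) and arms trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT so a failing assertion cannot leak the process. A simplified sketch of such a cleanup helper, pieced together from the kill -0 / ps / kill / wait calls visible later in this log; the real killprocess in autotest_common.sh does more (for instance it special-cases targets wrapped in sudo):

  cleanup_tgt() {
      local pid=$1
      [ -n "$pid" ] || return 0
      kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if it already exited
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true
  }

  trap 'cleanup_tgt "$spdk_pid"; exit 1' SIGINT SIGTERM EXIT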
00:04:02.883 [2024-05-16 20:02:49.899768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92111 ] 00:04:02.883 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.883 [2024-05-16 20:02:49.964045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.142 [2024-05-16 20:02:50.076700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.401 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:03.401 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:04:03.401 20:02:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:03.401 20:02:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:03.401 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:03.401 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:03.401 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.401 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:03.401 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.401 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:03.401 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.401 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:03.401 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.401 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:03.401 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:03.401 [2024-05-16 20:02:50.367193] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:03.401 [2024-05-16 20:02:50.367286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92127 ] 00:04:03.401 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.401 [2024-05-16 20:02:50.423704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.401 [2024-05-16 20:02:50.531249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:03.401 [2024-05-16 20:02:50.531361] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
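The failure just logged is the expected outcome: the second target (-m 0x2) tries to bind the same default /var/tmp/spdk.sock that pid 92111 already holds, so the RPC listener refuses, the app stops with a non-zero status, and the NOT wrapper below treats that as success. When two targets genuinely need to run side by side, each is normally given its own RPC listen address with -r and rpc.py is pointed at the matching socket with -s; a hedged sketch with arbitrary example socket paths (each instance still needs its own share of hugepages):

  "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -r /var/tmp/spdk_a.sock &
  "$SPDK_DIR/build/bin/spdk_tgt" -m 0x2 -r /var/tmp/spdk_b.sock &

  # rpc.py must target the matching socket explicitly:
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_a.sock spdk_get_version
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_b.sock spdk_get_version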
00:04:03.401 [2024-05-16 20:02:50.531387] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:03.401 [2024-05-16 20:02:50.531402] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:03.660 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:03.660 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:03.660 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:03.660 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:03.660 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:03.660 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:03.660 20:02:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:03.660 20:02:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 92111 00:04:03.660 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 92111 ']' 00:04:03.660 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 92111 00:04:03.660 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:04:03.660 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:03.660 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92111 00:04:03.660 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:03.660 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:03.660 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92111' 00:04:03.660 killing process with pid 92111 00:04:03.660 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 92111 00:04:03.660 20:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 92111 00:04:04.228 00:04:04.228 real 0m1.266s 00:04:04.228 user 0m1.422s 00:04:04.228 sys 0m0.426s 00:04:04.228 20:02:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:04.228 20:02:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:04.228 ************************************ 00:04:04.228 END TEST exit_on_failed_rpc_init 00:04:04.228 ************************************ 00:04:04.228 20:02:51 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:04.228 00:04:04.228 real 0m13.490s 00:04:04.228 user 0m12.824s 00:04:04.228 sys 0m1.513s 00:04:04.228 20:02:51 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:04.228 20:02:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.228 ************************************ 00:04:04.228 END TEST skip_rpc 00:04:04.228 ************************************ 00:04:04.228 20:02:51 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:04.228 20:02:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:04.228 20:02:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:04.228 20:02:51 -- 
common/autotest_common.sh@10 -- # set +x 00:04:04.228 ************************************ 00:04:04.228 START TEST rpc_client 00:04:04.228 ************************************ 00:04:04.228 20:02:51 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:04.228 * Looking for test storage... 00:04:04.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:04.228 20:02:51 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:04.228 OK 00:04:04.229 20:02:51 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:04.229 00:04:04.229 real 0m0.059s 00:04:04.229 user 0m0.023s 00:04:04.229 sys 0m0.041s 00:04:04.229 20:02:51 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:04.229 20:02:51 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:04.229 ************************************ 00:04:04.229 END TEST rpc_client 00:04:04.229 ************************************ 00:04:04.229 20:02:51 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:04.229 20:02:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:04.229 20:02:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:04.229 20:02:51 -- common/autotest_common.sh@10 -- # set +x 00:04:04.229 ************************************ 00:04:04.229 START TEST json_config 00:04:04.229 ************************************ 00:04:04.229 20:02:51 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:04.229 20:02:51 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:04.229 20:02:51 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:04.229 20:02:51 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:04.229 20:02:51 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.229 20:02:51 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.229 20:02:51 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.229 20:02:51 json_config -- paths/export.sh@5 -- # export PATH 00:04:04.229 20:02:51 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@47 -- # : 0 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:04.229 20:02:51 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:04.229 INFO: JSON configuration test init 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:04.229 20:02:51 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:04.229 20:02:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:04.229 20:02:51 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:04.229 20:02:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.229 20:02:51 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:04.229 20:02:51 json_config -- json_config/common.sh@9 -- # local app=target 00:04:04.229 20:02:51 json_config -- json_config/common.sh@10 -- # shift 00:04:04.229 20:02:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:04.229 20:02:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:04.229 20:02:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:04.229 20:02:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.229 20:02:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.229 20:02:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=92369 00:04:04.229 20:02:51 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:04.229 20:02:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:04.229 Waiting for target to run... 
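The "Waiting for target to run..." message above comes from json_config_test_start_app: the harness keeps per-app settings in bash associative arrays (visible in the declare -A traces) and launches spdk_tgt on a dedicated RPC socket with --wait-for-rpc, so the configuration can be driven over RPC before subsystems initialize; the waitforlisten call that follows then polls that socket. A condensed sketch of the pattern, assuming a built spdk_tgt and omitting the harness's error trapping (the polling loop below is an illustrative stand-in for waitforlisten):

  declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A app_pid

  # start the target paused on RPC init; configuration is loaded over the socket
  ./build/bin/spdk_tgt ${app_params[target]} -r "${app_socket[target]}" --wait-for-rpc &
  app_pid[target]=$!

  # poll until the UNIX-domain socket answers RPCs
  until ./scripts/rpc.py -s "${app_socket[target]}" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done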
00:04:04.229 20:02:51 json_config -- json_config/common.sh@25 -- # waitforlisten 92369 /var/tmp/spdk_tgt.sock 00:04:04.229 20:02:51 json_config -- common/autotest_common.sh@827 -- # '[' -z 92369 ']' 00:04:04.229 20:02:51 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:04.229 20:02:51 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:04.229 20:02:51 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:04.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:04.229 20:02:51 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:04.229 20:02:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.489 [2024-05-16 20:02:51.399921] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:04.489 [2024-05-16 20:02:51.399997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92369 ] 00:04:04.489 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.056 [2024-05-16 20:02:51.905639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.056 [2024-05-16 20:02:51.994586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.315 20:02:52 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:05.315 20:02:52 json_config -- common/autotest_common.sh@860 -- # return 0 00:04:05.315 20:02:52 json_config -- json_config/common.sh@26 -- # echo '' 00:04:05.315 00:04:05.315 20:02:52 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:05.315 20:02:52 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:05.315 20:02:52 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:05.315 20:02:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.315 20:02:52 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:05.315 20:02:52 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:05.315 20:02:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.315 20:02:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.315 20:02:52 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:05.315 20:02:52 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:05.315 20:02:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:08.603 20:02:55 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:08.603 20:02:55 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:08.603 20:02:55 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:08.603 20:02:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.603 20:02:55 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:08.603 20:02:55 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:08.603 20:02:55 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:04:08.603 20:02:55 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:08.603 20:02:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:08.603 20:02:55 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:08.603 20:02:55 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:08.603 20:02:55 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:08.603 20:02:55 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:08.603 20:02:55 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:08.603 20:02:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.603 20:02:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.861 20:02:55 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:08.861 20:02:55 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:08.861 20:02:55 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:08.861 20:02:55 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:08.861 20:02:55 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:08.861 20:02:55 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:08.861 20:02:55 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:08.861 20:02:55 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:08.861 20:02:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.861 20:02:55 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:08.861 20:02:55 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:08.861 20:02:55 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:08.861 20:02:55 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:08.861 20:02:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:08.861 MallocForNvmf0 00:04:09.120 20:02:56 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:09.120 20:02:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:09.120 MallocForNvmf1 00:04:09.120 20:02:56 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:09.120 20:02:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:09.379 [2024-05-16 20:02:56.481698] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:09.379 20:02:56 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:09.379 20:02:56 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:09.637 20:02:56 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:09.637 20:02:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:09.897 20:02:56 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:09.897 20:02:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:10.154 20:02:57 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:10.154 20:02:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:10.412 [2024-05-16 20:02:57.464434] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:10.412 [2024-05-16 20:02:57.465010] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:10.412 20:02:57 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:10.412 20:02:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:10.412 20:02:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.412 20:02:57 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:10.412 20:02:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:10.412 20:02:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.412 20:02:57 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:10.412 20:02:57 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:10.413 20:02:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:10.671 MallocBdevForConfigChangeCheck 00:04:10.671 20:02:57 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:10.671 20:02:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:10.671 20:02:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.671 20:02:57 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:10.671 20:02:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:11.237 20:02:58 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:11.237 INFO: shutting down applications... 
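Before the shutdown phase that starts here, the test has just built the NVMe-oF side of the target configuration entirely through rpc.py against /var/tmp/spdk_tgt.sock. Pulled out of the trace above, the sequence is roughly as follows (the RPC_PY variable and the local save_config redirect are shorthand for the harness's tgt_rpc wrapper and its spdk_tgt_config.json path):

  RPC_PY="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

  # malloc bdevs that will back the namespaces
  $RPC_PY bdev_malloc_create 8 512  --name MallocForNvmf0
  $RPC_PY bdev_malloc_create 4 1024 --name MallocForNvmf1

  # TCP transport, subsystem, namespaces, and a listener on 127.0.0.1:4420
  $RPC_PY nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC_PY nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC_PY nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC_PY nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC_PY nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

  # extra bdev used later to detect a configuration change, then snapshot the config
  $RPC_PY bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
  $RPC_PY save_config > spdk_tgt_config.json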
00:04:11.237 20:02:58 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:11.237 20:02:58 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:11.237 20:02:58 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:11.237 20:02:58 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:12.612 Calling clear_iscsi_subsystem 00:04:12.612 Calling clear_nvmf_subsystem 00:04:12.612 Calling clear_nbd_subsystem 00:04:12.612 Calling clear_ublk_subsystem 00:04:12.612 Calling clear_vhost_blk_subsystem 00:04:12.612 Calling clear_vhost_scsi_subsystem 00:04:12.612 Calling clear_bdev_subsystem 00:04:12.612 20:02:59 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:12.612 20:02:59 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:12.612 20:02:59 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:12.612 20:02:59 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:12.612 20:02:59 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:12.612 20:02:59 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:13.179 20:03:00 json_config -- json_config/json_config.sh@345 -- # break 00:04:13.179 20:03:00 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:13.179 20:03:00 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:13.179 20:03:00 json_config -- json_config/common.sh@31 -- # local app=target 00:04:13.179 20:03:00 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:13.179 20:03:00 json_config -- json_config/common.sh@35 -- # [[ -n 92369 ]] 00:04:13.179 20:03:00 json_config -- json_config/common.sh@38 -- # kill -SIGINT 92369 00:04:13.179 [2024-05-16 20:03:00.116298] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:13.179 20:03:00 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:13.179 20:03:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:13.179 20:03:00 json_config -- json_config/common.sh@41 -- # kill -0 92369 00:04:13.179 20:03:00 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:13.748 20:03:00 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:13.748 20:03:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:13.748 20:03:00 json_config -- json_config/common.sh@41 -- # kill -0 92369 00:04:13.748 20:03:00 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:13.748 20:03:00 json_config -- json_config/common.sh@43 -- # break 00:04:13.748 20:03:00 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:13.748 20:03:00 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:13.748 SPDK target shutdown done 00:04:13.748 20:03:00 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
00:04:13.748 INFO: relaunching applications... 00:04:13.748 20:03:00 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:13.748 20:03:00 json_config -- json_config/common.sh@9 -- # local app=target 00:04:13.748 20:03:00 json_config -- json_config/common.sh@10 -- # shift 00:04:13.748 20:03:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:13.748 20:03:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:13.748 20:03:00 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:13.748 20:03:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:13.748 20:03:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:13.748 20:03:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=93600 00:04:13.748 20:03:00 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:13.748 20:03:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:13.748 Waiting for target to run... 00:04:13.748 20:03:00 json_config -- json_config/common.sh@25 -- # waitforlisten 93600 /var/tmp/spdk_tgt.sock 00:04:13.748 20:03:00 json_config -- common/autotest_common.sh@827 -- # '[' -z 93600 ']' 00:04:13.748 20:03:00 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:13.748 20:03:00 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:13.748 20:03:00 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:13.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:13.748 20:03:00 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:13.748 20:03:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.748 [2024-05-16 20:03:00.667996] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:04:13.748 [2024-05-16 20:03:00.668089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93600 ] 00:04:13.748 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.007 [2024-05-16 20:03:01.011296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.007 [2024-05-16 20:03:01.114506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.297 [2024-05-16 20:03:04.148199] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:17.297 [2024-05-16 20:03:04.180170] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:17.297 [2024-05-16 20:03:04.180608] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:17.297 20:03:04 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:17.297 20:03:04 json_config -- common/autotest_common.sh@860 -- # return 0 00:04:17.297 20:03:04 json_config -- json_config/common.sh@26 -- # echo '' 00:04:17.297 00:04:17.297 20:03:04 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:17.297 20:03:04 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:17.297 INFO: Checking if target configuration is the same... 00:04:17.297 20:03:04 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:17.297 20:03:04 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:17.297 20:03:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:17.297 + '[' 2 -ne 2 ']' 00:04:17.297 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:17.297 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:17.297 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:17.297 +++ basename /dev/fd/62 00:04:17.297 ++ mktemp /tmp/62.XXX 00:04:17.297 + tmp_file_1=/tmp/62.JCh 00:04:17.297 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:17.297 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:17.297 + tmp_file_2=/tmp/spdk_tgt_config.json.aL7 00:04:17.297 + ret=0 00:04:17.297 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:17.555 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:17.555 + diff -u /tmp/62.JCh /tmp/spdk_tgt_config.json.aL7 00:04:17.555 + echo 'INFO: JSON config files are the same' 00:04:17.555 INFO: JSON config files are the same 00:04:17.555 + rm /tmp/62.JCh /tmp/spdk_tgt_config.json.aL7 00:04:17.555 + exit 0 00:04:17.555 20:03:04 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:17.555 20:03:04 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:17.555 INFO: changing configuration and checking if this can be detected... 
00:04:17.555 20:03:04 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:17.555 20:03:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:17.812 20:03:04 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:17.812 20:03:04 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:17.812 20:03:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:17.812 + '[' 2 -ne 2 ']' 00:04:17.812 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:17.812 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:17.812 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:17.812 +++ basename /dev/fd/62 00:04:17.812 ++ mktemp /tmp/62.XXX 00:04:17.812 + tmp_file_1=/tmp/62.YoQ 00:04:17.812 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:17.812 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:17.812 + tmp_file_2=/tmp/spdk_tgt_config.json.ZJb 00:04:17.812 + ret=0 00:04:17.812 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:18.377 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:18.377 + diff -u /tmp/62.YoQ /tmp/spdk_tgt_config.json.ZJb 00:04:18.377 + ret=1 00:04:18.377 + echo '=== Start of file: /tmp/62.YoQ ===' 00:04:18.377 + cat /tmp/62.YoQ 00:04:18.377 + echo '=== End of file: /tmp/62.YoQ ===' 00:04:18.377 + echo '' 00:04:18.377 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ZJb ===' 00:04:18.377 + cat /tmp/spdk_tgt_config.json.ZJb 00:04:18.377 + echo '=== End of file: /tmp/spdk_tgt_config.json.ZJb ===' 00:04:18.377 + echo '' 00:04:18.377 + rm /tmp/62.YoQ /tmp/spdk_tgt_config.json.ZJb 00:04:18.377 + exit 1 00:04:18.377 20:03:05 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:18.377 INFO: configuration change detected. 
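The "configuration change detected" result above is produced by json_diff.sh: both the freshly saved configuration and the reference file are normalized with config_filter.py -method sort and then compared with diff -u, and deleting MallocBdevForConfigChangeCheck is what makes this second comparison fail. The check boils down to something like the sketch below (temporary file names are illustrative; the harness uses mktemp as shown in the trace):

  RPC_PY="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  FILTER=./test/json_config/config_filter.py

  # dump the live configuration and sort both JSON documents so field order is irrelevant
  $RPC_PY save_config | $FILTER -method sort            > /tmp/live_sorted.json
  $FILTER -method sort < spdk_tgt_config.json           > /tmp/ref_sorted.json

  if diff -u /tmp/ref_sorted.json /tmp/live_sorted.json; then
      echo "INFO: JSON config files are the same"
  else
      echo "INFO: configuration change detected."
  fi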
00:04:18.377 20:03:05 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:18.377 20:03:05 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:18.377 20:03:05 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:18.377 20:03:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.377 20:03:05 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:18.377 20:03:05 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:18.377 20:03:05 json_config -- json_config/json_config.sh@317 -- # [[ -n 93600 ]] 00:04:18.377 20:03:05 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:18.377 20:03:05 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:18.377 20:03:05 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:18.377 20:03:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.377 20:03:05 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:18.377 20:03:05 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:18.377 20:03:05 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:18.377 20:03:05 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:18.377 20:03:05 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:18.377 20:03:05 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:18.377 20:03:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:18.377 20:03:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.377 20:03:05 json_config -- json_config/json_config.sh@323 -- # killprocess 93600 00:04:18.377 20:03:05 json_config -- common/autotest_common.sh@946 -- # '[' -z 93600 ']' 00:04:18.377 20:03:05 json_config -- common/autotest_common.sh@950 -- # kill -0 93600 00:04:18.377 20:03:05 json_config -- common/autotest_common.sh@951 -- # uname 00:04:18.377 20:03:05 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:18.377 20:03:05 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93600 00:04:18.377 20:03:05 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:18.377 20:03:05 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:18.377 20:03:05 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93600' 00:04:18.377 killing process with pid 93600 00:04:18.377 20:03:05 json_config -- common/autotest_common.sh@965 -- # kill 93600 00:04:18.377 [2024-05-16 20:03:05.376600] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:18.377 20:03:05 json_config -- common/autotest_common.sh@970 -- # wait 93600 00:04:20.277 20:03:06 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.277 20:03:06 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:20.277 20:03:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.277 20:03:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.277 20:03:06 json_config -- 
json_config/json_config.sh@328 -- # return 0 00:04:20.277 20:03:06 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:20.277 INFO: Success 00:04:20.277 00:04:20.277 real 0m15.669s 00:04:20.277 user 0m17.372s 00:04:20.277 sys 0m1.968s 00:04:20.277 20:03:06 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:20.277 20:03:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.278 ************************************ 00:04:20.278 END TEST json_config 00:04:20.278 ************************************ 00:04:20.278 20:03:06 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:20.278 20:03:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:20.278 20:03:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:20.278 20:03:06 -- common/autotest_common.sh@10 -- # set +x 00:04:20.278 ************************************ 00:04:20.278 START TEST json_config_extra_key 00:04:20.278 ************************************ 00:04:20.278 20:03:07 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:20.278 20:03:07 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:20.278 20:03:07 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:20.278 20:03:07 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:20.278 20:03:07 
json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:20.278 20:03:07 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.278 20:03:07 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.278 20:03:07 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.278 20:03:07 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:20.278 20:03:07 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:20.278 20:03:07 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:20.278 20:03:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:20.278 20:03:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:20.278 20:03:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:20.278 20:03:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:20.278 20:03:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:20.278 20:03:07 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:20.278 20:03:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:20.278 20:03:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:20.278 20:03:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:20.278 20:03:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:20.278 20:03:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:20.278 INFO: launching applications... 00:04:20.278 20:03:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:20.278 20:03:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:20.278 20:03:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:20.278 20:03:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:20.278 20:03:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:20.278 20:03:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:20.278 20:03:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.278 20:03:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.278 20:03:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=94578 00:04:20.278 20:03:07 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:20.278 20:03:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:20.278 Waiting for target to run... 00:04:20.278 20:03:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 94578 /var/tmp/spdk_tgt.sock 00:04:20.278 20:03:07 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 94578 ']' 00:04:20.278 20:03:07 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:20.278 20:03:07 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:20.278 20:03:07 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:20.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:20.278 20:03:07 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:20.278 20:03:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:20.278 [2024-05-16 20:03:07.128397] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:04:20.278 [2024-05-16 20:03:07.128477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94578 ] 00:04:20.278 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.540 [2024-05-16 20:03:07.482031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.540 [2024-05-16 20:03:07.569706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.105 20:03:08 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:21.105 20:03:08 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:04:21.105 20:03:08 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:21.105 00:04:21.105 20:03:08 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:21.105 INFO: shutting down applications... 00:04:21.105 20:03:08 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:21.105 20:03:08 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:21.105 20:03:08 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:21.105 20:03:08 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 94578 ]] 00:04:21.105 20:03:08 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 94578 00:04:21.105 20:03:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:21.105 20:03:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:21.105 20:03:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 94578 00:04:21.105 20:03:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:21.672 20:03:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:21.672 20:03:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:21.672 20:03:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 94578 00:04:21.672 20:03:08 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:21.672 20:03:08 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:21.672 20:03:08 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:21.672 20:03:08 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:21.672 SPDK target shutdown done 00:04:21.672 20:03:08 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:21.672 Success 00:04:21.672 00:04:21.672 real 0m1.556s 00:04:21.672 user 0m1.508s 00:04:21.672 sys 0m0.433s 00:04:21.672 20:03:08 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:21.672 20:03:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:21.672 ************************************ 00:04:21.672 END TEST json_config_extra_key 00:04:21.672 ************************************ 00:04:21.672 20:03:08 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:21.672 20:03:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:21.672 20:03:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:21.672 20:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:21.672 ************************************ 00:04:21.672 
START TEST alias_rpc 00:04:21.672 ************************************ 00:04:21.672 20:03:08 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:21.672 * Looking for test storage... 00:04:21.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:21.672 20:03:08 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:21.672 20:03:08 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=95353 00:04:21.672 20:03:08 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.672 20:03:08 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 95353 00:04:21.672 20:03:08 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 95353 ']' 00:04:21.672 20:03:08 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.672 20:03:08 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:21.672 20:03:08 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.672 20:03:08 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:21.672 20:03:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.672 [2024-05-16 20:03:08.736243] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:21.672 [2024-05-16 20:03:08.736336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95353 ] 00:04:21.672 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.672 [2024-05-16 20:03:08.795493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.930 [2024-05-16 20:03:08.904645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.188 20:03:09 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:22.188 20:03:09 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:22.188 20:03:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:22.446 20:03:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 95353 00:04:22.446 20:03:09 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 95353 ']' 00:04:22.446 20:03:09 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 95353 00:04:22.446 20:03:09 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:04:22.446 20:03:09 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:22.446 20:03:09 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95353 00:04:22.446 20:03:09 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:22.446 20:03:09 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:22.446 20:03:09 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95353' 00:04:22.446 killing process with pid 95353 00:04:22.446 20:03:09 alias_rpc -- common/autotest_common.sh@965 -- # kill 95353 00:04:22.446 20:03:09 alias_rpc -- common/autotest_common.sh@970 -- # wait 95353 00:04:22.705 00:04:22.705 real 0m1.186s 
00:04:22.705 user 0m1.281s 00:04:22.705 sys 0m0.386s 00:04:22.705 20:03:09 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:22.705 20:03:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.705 ************************************ 00:04:22.705 END TEST alias_rpc 00:04:22.705 ************************************ 00:04:22.705 20:03:09 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:22.705 20:03:09 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:22.705 20:03:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:22.705 20:03:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:22.705 20:03:09 -- common/autotest_common.sh@10 -- # set +x 00:04:22.964 ************************************ 00:04:22.964 START TEST spdkcli_tcp 00:04:22.964 ************************************ 00:04:22.964 20:03:09 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:22.964 * Looking for test storage... 00:04:22.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:22.964 20:03:09 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:22.964 20:03:09 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:22.964 20:03:09 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:22.964 20:03:09 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:22.964 20:03:09 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:22.964 20:03:09 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:22.964 20:03:09 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:22.964 20:03:09 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:22.964 20:03:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:22.964 20:03:09 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=95575 00:04:22.964 20:03:09 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:22.964 20:03:09 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 95575 00:04:22.964 20:03:09 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 95575 ']' 00:04:22.964 20:03:09 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.964 20:03:09 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:22.964 20:03:09 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.964 20:03:09 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:22.964 20:03:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:22.964 [2024-05-16 20:03:09.965195] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:04:22.964 [2024-05-16 20:03:09.965279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95575 ] 00:04:22.964 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.964 [2024-05-16 20:03:10.026052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:23.222 [2024-05-16 20:03:10.141336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.222 [2024-05-16 20:03:10.141340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.480 20:03:10 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:23.480 20:03:10 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:04:23.480 20:03:10 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=95589 00:04:23.481 20:03:10 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:23.481 20:03:10 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:23.481 [ 00:04:23.481 "bdev_malloc_delete", 00:04:23.481 "bdev_malloc_create", 00:04:23.481 "bdev_null_resize", 00:04:23.481 "bdev_null_delete", 00:04:23.481 "bdev_null_create", 00:04:23.481 "bdev_nvme_cuse_unregister", 00:04:23.481 "bdev_nvme_cuse_register", 00:04:23.481 "bdev_opal_new_user", 00:04:23.481 "bdev_opal_set_lock_state", 00:04:23.481 "bdev_opal_delete", 00:04:23.481 "bdev_opal_get_info", 00:04:23.481 "bdev_opal_create", 00:04:23.481 "bdev_nvme_opal_revert", 00:04:23.481 "bdev_nvme_opal_init", 00:04:23.481 "bdev_nvme_send_cmd", 00:04:23.481 "bdev_nvme_get_path_iostat", 00:04:23.481 "bdev_nvme_get_mdns_discovery_info", 00:04:23.481 "bdev_nvme_stop_mdns_discovery", 00:04:23.481 "bdev_nvme_start_mdns_discovery", 00:04:23.481 "bdev_nvme_set_multipath_policy", 00:04:23.481 "bdev_nvme_set_preferred_path", 00:04:23.481 "bdev_nvme_get_io_paths", 00:04:23.481 "bdev_nvme_remove_error_injection", 00:04:23.481 "bdev_nvme_add_error_injection", 00:04:23.481 "bdev_nvme_get_discovery_info", 00:04:23.481 "bdev_nvme_stop_discovery", 00:04:23.481 "bdev_nvme_start_discovery", 00:04:23.481 "bdev_nvme_get_controller_health_info", 00:04:23.481 "bdev_nvme_disable_controller", 00:04:23.481 "bdev_nvme_enable_controller", 00:04:23.481 "bdev_nvme_reset_controller", 00:04:23.481 "bdev_nvme_get_transport_statistics", 00:04:23.481 "bdev_nvme_apply_firmware", 00:04:23.481 "bdev_nvme_detach_controller", 00:04:23.481 "bdev_nvme_get_controllers", 00:04:23.481 "bdev_nvme_attach_controller", 00:04:23.481 "bdev_nvme_set_hotplug", 00:04:23.481 "bdev_nvme_set_options", 00:04:23.481 "bdev_passthru_delete", 00:04:23.481 "bdev_passthru_create", 00:04:23.481 "bdev_lvol_set_parent_bdev", 00:04:23.481 "bdev_lvol_set_parent", 00:04:23.481 "bdev_lvol_check_shallow_copy", 00:04:23.481 "bdev_lvol_start_shallow_copy", 00:04:23.481 "bdev_lvol_grow_lvstore", 00:04:23.481 "bdev_lvol_get_lvols", 00:04:23.481 "bdev_lvol_get_lvstores", 00:04:23.481 "bdev_lvol_delete", 00:04:23.481 "bdev_lvol_set_read_only", 00:04:23.481 "bdev_lvol_resize", 00:04:23.481 "bdev_lvol_decouple_parent", 00:04:23.481 "bdev_lvol_inflate", 00:04:23.481 "bdev_lvol_rename", 00:04:23.481 "bdev_lvol_clone_bdev", 00:04:23.481 "bdev_lvol_clone", 00:04:23.481 "bdev_lvol_snapshot", 00:04:23.481 "bdev_lvol_create", 00:04:23.481 "bdev_lvol_delete_lvstore", 00:04:23.481 "bdev_lvol_rename_lvstore", 
00:04:23.481 "bdev_lvol_create_lvstore", 00:04:23.481 "bdev_raid_set_options", 00:04:23.481 "bdev_raid_remove_base_bdev", 00:04:23.481 "bdev_raid_add_base_bdev", 00:04:23.481 "bdev_raid_delete", 00:04:23.481 "bdev_raid_create", 00:04:23.481 "bdev_raid_get_bdevs", 00:04:23.481 "bdev_error_inject_error", 00:04:23.481 "bdev_error_delete", 00:04:23.481 "bdev_error_create", 00:04:23.481 "bdev_split_delete", 00:04:23.481 "bdev_split_create", 00:04:23.481 "bdev_delay_delete", 00:04:23.481 "bdev_delay_create", 00:04:23.481 "bdev_delay_update_latency", 00:04:23.481 "bdev_zone_block_delete", 00:04:23.481 "bdev_zone_block_create", 00:04:23.481 "blobfs_create", 00:04:23.481 "blobfs_detect", 00:04:23.481 "blobfs_set_cache_size", 00:04:23.481 "bdev_aio_delete", 00:04:23.481 "bdev_aio_rescan", 00:04:23.481 "bdev_aio_create", 00:04:23.481 "bdev_ftl_set_property", 00:04:23.481 "bdev_ftl_get_properties", 00:04:23.481 "bdev_ftl_get_stats", 00:04:23.481 "bdev_ftl_unmap", 00:04:23.481 "bdev_ftl_unload", 00:04:23.481 "bdev_ftl_delete", 00:04:23.481 "bdev_ftl_load", 00:04:23.481 "bdev_ftl_create", 00:04:23.481 "bdev_virtio_attach_controller", 00:04:23.481 "bdev_virtio_scsi_get_devices", 00:04:23.481 "bdev_virtio_detach_controller", 00:04:23.481 "bdev_virtio_blk_set_hotplug", 00:04:23.481 "bdev_iscsi_delete", 00:04:23.481 "bdev_iscsi_create", 00:04:23.481 "bdev_iscsi_set_options", 00:04:23.481 "accel_error_inject_error", 00:04:23.481 "ioat_scan_accel_module", 00:04:23.481 "dsa_scan_accel_module", 00:04:23.481 "iaa_scan_accel_module", 00:04:23.481 "vfu_virtio_create_scsi_endpoint", 00:04:23.481 "vfu_virtio_scsi_remove_target", 00:04:23.481 "vfu_virtio_scsi_add_target", 00:04:23.481 "vfu_virtio_create_blk_endpoint", 00:04:23.481 "vfu_virtio_delete_endpoint", 00:04:23.481 "keyring_file_remove_key", 00:04:23.481 "keyring_file_add_key", 00:04:23.481 "iscsi_get_histogram", 00:04:23.481 "iscsi_enable_histogram", 00:04:23.481 "iscsi_set_options", 00:04:23.481 "iscsi_get_auth_groups", 00:04:23.481 "iscsi_auth_group_remove_secret", 00:04:23.481 "iscsi_auth_group_add_secret", 00:04:23.481 "iscsi_delete_auth_group", 00:04:23.481 "iscsi_create_auth_group", 00:04:23.481 "iscsi_set_discovery_auth", 00:04:23.481 "iscsi_get_options", 00:04:23.481 "iscsi_target_node_request_logout", 00:04:23.481 "iscsi_target_node_set_redirect", 00:04:23.481 "iscsi_target_node_set_auth", 00:04:23.481 "iscsi_target_node_add_lun", 00:04:23.481 "iscsi_get_stats", 00:04:23.481 "iscsi_get_connections", 00:04:23.481 "iscsi_portal_group_set_auth", 00:04:23.481 "iscsi_start_portal_group", 00:04:23.481 "iscsi_delete_portal_group", 00:04:23.481 "iscsi_create_portal_group", 00:04:23.481 "iscsi_get_portal_groups", 00:04:23.481 "iscsi_delete_target_node", 00:04:23.481 "iscsi_target_node_remove_pg_ig_maps", 00:04:23.481 "iscsi_target_node_add_pg_ig_maps", 00:04:23.481 "iscsi_create_target_node", 00:04:23.481 "iscsi_get_target_nodes", 00:04:23.481 "iscsi_delete_initiator_group", 00:04:23.481 "iscsi_initiator_group_remove_initiators", 00:04:23.481 "iscsi_initiator_group_add_initiators", 00:04:23.481 "iscsi_create_initiator_group", 00:04:23.481 "iscsi_get_initiator_groups", 00:04:23.481 "nvmf_set_crdt", 00:04:23.481 "nvmf_set_config", 00:04:23.481 "nvmf_set_max_subsystems", 00:04:23.481 "nvmf_stop_mdns_prr", 00:04:23.481 "nvmf_publish_mdns_prr", 00:04:23.481 "nvmf_subsystem_get_listeners", 00:04:23.481 "nvmf_subsystem_get_qpairs", 00:04:23.481 "nvmf_subsystem_get_controllers", 00:04:23.481 "nvmf_get_stats", 00:04:23.481 "nvmf_get_transports", 00:04:23.481 
"nvmf_create_transport", 00:04:23.481 "nvmf_get_targets", 00:04:23.481 "nvmf_delete_target", 00:04:23.481 "nvmf_create_target", 00:04:23.481 "nvmf_subsystem_allow_any_host", 00:04:23.481 "nvmf_subsystem_remove_host", 00:04:23.481 "nvmf_subsystem_add_host", 00:04:23.481 "nvmf_ns_remove_host", 00:04:23.481 "nvmf_ns_add_host", 00:04:23.481 "nvmf_subsystem_remove_ns", 00:04:23.481 "nvmf_subsystem_add_ns", 00:04:23.481 "nvmf_subsystem_listener_set_ana_state", 00:04:23.481 "nvmf_discovery_get_referrals", 00:04:23.481 "nvmf_discovery_remove_referral", 00:04:23.481 "nvmf_discovery_add_referral", 00:04:23.481 "nvmf_subsystem_remove_listener", 00:04:23.481 "nvmf_subsystem_add_listener", 00:04:23.481 "nvmf_delete_subsystem", 00:04:23.481 "nvmf_create_subsystem", 00:04:23.481 "nvmf_get_subsystems", 00:04:23.481 "env_dpdk_get_mem_stats", 00:04:23.481 "nbd_get_disks", 00:04:23.481 "nbd_stop_disk", 00:04:23.481 "nbd_start_disk", 00:04:23.481 "ublk_recover_disk", 00:04:23.481 "ublk_get_disks", 00:04:23.481 "ublk_stop_disk", 00:04:23.481 "ublk_start_disk", 00:04:23.481 "ublk_destroy_target", 00:04:23.481 "ublk_create_target", 00:04:23.481 "virtio_blk_create_transport", 00:04:23.481 "virtio_blk_get_transports", 00:04:23.481 "vhost_controller_set_coalescing", 00:04:23.481 "vhost_get_controllers", 00:04:23.481 "vhost_delete_controller", 00:04:23.481 "vhost_create_blk_controller", 00:04:23.481 "vhost_scsi_controller_remove_target", 00:04:23.481 "vhost_scsi_controller_add_target", 00:04:23.481 "vhost_start_scsi_controller", 00:04:23.481 "vhost_create_scsi_controller", 00:04:23.481 "thread_set_cpumask", 00:04:23.481 "framework_get_scheduler", 00:04:23.481 "framework_set_scheduler", 00:04:23.481 "framework_get_reactors", 00:04:23.481 "thread_get_io_channels", 00:04:23.481 "thread_get_pollers", 00:04:23.481 "thread_get_stats", 00:04:23.481 "framework_monitor_context_switch", 00:04:23.481 "spdk_kill_instance", 00:04:23.481 "log_enable_timestamps", 00:04:23.481 "log_get_flags", 00:04:23.481 "log_clear_flag", 00:04:23.481 "log_set_flag", 00:04:23.481 "log_get_level", 00:04:23.481 "log_set_level", 00:04:23.481 "log_get_print_level", 00:04:23.481 "log_set_print_level", 00:04:23.481 "framework_enable_cpumask_locks", 00:04:23.481 "framework_disable_cpumask_locks", 00:04:23.481 "framework_wait_init", 00:04:23.481 "framework_start_init", 00:04:23.481 "scsi_get_devices", 00:04:23.481 "bdev_get_histogram", 00:04:23.481 "bdev_enable_histogram", 00:04:23.481 "bdev_set_qos_limit", 00:04:23.481 "bdev_set_qd_sampling_period", 00:04:23.481 "bdev_get_bdevs", 00:04:23.481 "bdev_reset_iostat", 00:04:23.481 "bdev_get_iostat", 00:04:23.481 "bdev_examine", 00:04:23.481 "bdev_wait_for_examine", 00:04:23.481 "bdev_set_options", 00:04:23.481 "notify_get_notifications", 00:04:23.481 "notify_get_types", 00:04:23.481 "accel_get_stats", 00:04:23.481 "accel_set_options", 00:04:23.481 "accel_set_driver", 00:04:23.481 "accel_crypto_key_destroy", 00:04:23.481 "accel_crypto_keys_get", 00:04:23.481 "accel_crypto_key_create", 00:04:23.481 "accel_assign_opc", 00:04:23.481 "accel_get_module_info", 00:04:23.481 "accel_get_opc_assignments", 00:04:23.481 "vmd_rescan", 00:04:23.481 "vmd_remove_device", 00:04:23.481 "vmd_enable", 00:04:23.481 "sock_get_default_impl", 00:04:23.481 "sock_set_default_impl", 00:04:23.481 "sock_impl_set_options", 00:04:23.481 "sock_impl_get_options", 00:04:23.481 "iobuf_get_stats", 00:04:23.481 "iobuf_set_options", 00:04:23.481 "keyring_get_keys", 00:04:23.481 "framework_get_pci_devices", 00:04:23.481 "framework_get_config", 
00:04:23.481 "framework_get_subsystems", 00:04:23.482 "vfu_tgt_set_base_path", 00:04:23.482 "trace_get_info", 00:04:23.482 "trace_get_tpoint_group_mask", 00:04:23.482 "trace_disable_tpoint_group", 00:04:23.482 "trace_enable_tpoint_group", 00:04:23.482 "trace_clear_tpoint_mask", 00:04:23.482 "trace_set_tpoint_mask", 00:04:23.482 "spdk_get_version", 00:04:23.482 "rpc_get_methods" 00:04:23.482 ] 00:04:23.739 20:03:10 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:23.739 20:03:10 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.739 20:03:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:23.739 20:03:10 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:23.739 20:03:10 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 95575 00:04:23.739 20:03:10 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 95575 ']' 00:04:23.739 20:03:10 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 95575 00:04:23.739 20:03:10 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:04:23.739 20:03:10 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:23.739 20:03:10 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95575 00:04:23.739 20:03:10 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:23.739 20:03:10 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:23.739 20:03:10 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95575' 00:04:23.739 killing process with pid 95575 00:04:23.739 20:03:10 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 95575 00:04:23.739 20:03:10 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 95575 00:04:23.998 00:04:23.998 real 0m1.215s 00:04:23.998 user 0m2.159s 00:04:23.998 sys 0m0.416s 00:04:23.998 20:03:11 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:23.998 20:03:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:23.998 ************************************ 00:04:23.998 END TEST spdkcli_tcp 00:04:23.998 ************************************ 00:04:23.998 20:03:11 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:23.998 20:03:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:23.998 20:03:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.998 20:03:11 -- common/autotest_common.sh@10 -- # set +x 00:04:23.998 ************************************ 00:04:23.998 START TEST dpdk_mem_utility 00:04:23.998 ************************************ 00:04:23.998 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:24.258 * Looking for test storage... 
00:04:24.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:24.258 20:03:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:24.258 20:03:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=95785 00:04:24.258 20:03:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 95785 00:04:24.258 20:03:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.258 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 95785 ']' 00:04:24.258 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.258 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:24.258 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.258 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:24.258 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:24.258 [2024-05-16 20:03:11.231722] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:24.258 [2024-05-16 20:03:11.231813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95785 ] 00:04:24.258 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.258 [2024-05-16 20:03:11.287012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.258 [2024-05-16 20:03:11.392028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.516 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:24.516 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:04:24.516 20:03:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:24.516 20:03:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:24.516 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.516 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:24.516 { 00:04:24.516 "filename": "/tmp/spdk_mem_dump.txt" 00:04:24.516 } 00:04:24.516 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.516 20:03:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:24.774 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:24.774 1 heaps totaling size 814.000000 MiB 00:04:24.774 size: 814.000000 MiB heap id: 0 00:04:24.774 end heaps---------- 00:04:24.774 8 mempools totaling size 598.116089 MiB 00:04:24.774 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:24.774 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:24.774 size: 84.521057 MiB name: bdev_io_95785 00:04:24.774 size: 51.011292 MiB name: evtpool_95785 00:04:24.774 size: 50.003479 MiB name: msgpool_95785 
00:04:24.774 size: 21.763794 MiB name: PDU_Pool 00:04:24.774 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:24.774 size: 0.026123 MiB name: Session_Pool 00:04:24.774 end mempools------- 00:04:24.774 6 memzones totaling size 4.142822 MiB 00:04:24.774 size: 1.000366 MiB name: RG_ring_0_95785 00:04:24.774 size: 1.000366 MiB name: RG_ring_1_95785 00:04:24.774 size: 1.000366 MiB name: RG_ring_4_95785 00:04:24.774 size: 1.000366 MiB name: RG_ring_5_95785 00:04:24.774 size: 0.125366 MiB name: RG_ring_2_95785 00:04:24.774 size: 0.015991 MiB name: RG_ring_3_95785 00:04:24.774 end memzones------- 00:04:24.774 20:03:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:24.774 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:24.774 list of free elements. size: 12.519348 MiB 00:04:24.774 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:24.774 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:24.774 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:24.774 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:24.774 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:24.774 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:24.774 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:24.774 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:24.774 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:24.774 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:24.774 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:24.774 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:24.774 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:24.774 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:24.774 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:24.774 list of standard malloc elements. 
size: 199.218079 MiB 00:04:24.774 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:24.774 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:24.774 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:24.774 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:24.774 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:24.774 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:24.774 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:24.774 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:24.774 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:24.774 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:24.774 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:24.774 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:24.774 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:24.774 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:24.774 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:24.774 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:24.774 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:24.774 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:24.774 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:24.774 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:24.774 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:24.774 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:24.774 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:24.774 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:24.774 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:24.774 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:24.774 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:24.774 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:24.774 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:24.774 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:24.774 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:24.774 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:24.774 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:24.774 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:24.774 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:24.774 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:24.774 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:24.774 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:24.774 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:24.774 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:24.774 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:24.774 list of memzone associated elements. 
size: 602.262573 MiB 00:04:24.774 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:24.775 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:24.775 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:24.775 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:24.775 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:24.775 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_95785_0 00:04:24.775 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:24.775 associated memzone info: size: 48.002930 MiB name: MP_evtpool_95785_0 00:04:24.775 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:24.775 associated memzone info: size: 48.002930 MiB name: MP_msgpool_95785_0 00:04:24.775 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:24.775 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:24.775 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:24.775 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:24.775 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:24.775 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_95785 00:04:24.775 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:24.775 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_95785 00:04:24.775 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:24.775 associated memzone info: size: 1.007996 MiB name: MP_evtpool_95785 00:04:24.775 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:24.775 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:24.775 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:24.775 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:24.775 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:24.775 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:24.775 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:24.775 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:24.775 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:24.775 associated memzone info: size: 1.000366 MiB name: RG_ring_0_95785 00:04:24.775 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:24.775 associated memzone info: size: 1.000366 MiB name: RG_ring_1_95785 00:04:24.775 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:24.775 associated memzone info: size: 1.000366 MiB name: RG_ring_4_95785 00:04:24.775 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:24.775 associated memzone info: size: 1.000366 MiB name: RG_ring_5_95785 00:04:24.775 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:24.775 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_95785 00:04:24.775 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:24.775 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:24.775 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:24.775 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:24.775 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:24.775 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:24.775 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:24.775 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_95785 00:04:24.775 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:24.775 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:24.775 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:24.775 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:24.775 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:24.775 associated memzone info: size: 0.015991 MiB name: RG_ring_3_95785 00:04:24.775 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:24.775 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:24.775 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:24.775 associated memzone info: size: 0.000183 MiB name: MP_msgpool_95785 00:04:24.775 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:24.775 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_95785 00:04:24.775 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:24.775 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:24.775 20:03:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:24.775 20:03:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 95785 00:04:24.775 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 95785 ']' 00:04:24.775 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 95785 00:04:24.775 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:04:24.775 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:24.775 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95785 00:04:24.775 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:24.775 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:24.775 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95785' 00:04:24.775 killing process with pid 95785 00:04:24.775 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 95785 00:04:24.775 20:03:11 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 95785 00:04:25.034 00:04:25.034 real 0m1.034s 00:04:25.034 user 0m1.025s 00:04:25.034 sys 0m0.350s 00:04:25.034 20:03:12 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:25.034 20:03:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:25.034 ************************************ 00:04:25.034 END TEST dpdk_mem_utility 00:04:25.034 ************************************ 00:04:25.293 20:03:12 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:25.293 20:03:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:25.293 20:03:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:25.293 20:03:12 -- common/autotest_common.sh@10 -- # set +x 00:04:25.293 ************************************ 00:04:25.293 START TEST event 00:04:25.293 ************************************ 00:04:25.293 20:03:12 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:25.293 * Looking for test storage... 
00:04:25.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:25.293 20:03:12 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:25.293 20:03:12 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:25.293 20:03:12 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:25.293 20:03:12 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:04:25.293 20:03:12 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:25.293 20:03:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:25.293 ************************************ 00:04:25.293 START TEST event_perf 00:04:25.293 ************************************ 00:04:25.293 20:03:12 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:25.293 Running I/O for 1 seconds...[2024-05-16 20:03:12.315945] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:25.293 [2024-05-16 20:03:12.316016] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95973 ] 00:04:25.293 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.293 [2024-05-16 20:03:12.375528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:25.552 [2024-05-16 20:03:12.488310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.552 [2024-05-16 20:03:12.488404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:25.552 [2024-05-16 20:03:12.488469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:25.552 [2024-05-16 20:03:12.488472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.488 Running I/O for 1 seconds... 00:04:26.488 lcore 0: 237734 00:04:26.488 lcore 1: 237732 00:04:26.488 lcore 2: 237732 00:04:26.488 lcore 3: 237734 00:04:26.488 done. 00:04:26.488 00:04:26.488 real 0m1.294s 00:04:26.488 user 0m4.199s 00:04:26.488 sys 0m0.088s 00:04:26.488 20:03:13 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:26.488 20:03:13 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:26.488 ************************************ 00:04:26.488 END TEST event_perf 00:04:26.488 ************************************ 00:04:26.488 20:03:13 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:26.488 20:03:13 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:26.488 20:03:13 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:26.488 20:03:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:26.745 ************************************ 00:04:26.745 START TEST event_reactor 00:04:26.745 ************************************ 00:04:26.745 20:03:13 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:26.745 [2024-05-16 20:03:13.666490] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:04:26.745 [2024-05-16 20:03:13.666554] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96144 ] 00:04:26.745 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.745 [2024-05-16 20:03:13.722264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.745 [2024-05-16 20:03:13.825549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.119 test_start 00:04:28.119 oneshot 00:04:28.119 tick 100 00:04:28.119 tick 100 00:04:28.119 tick 250 00:04:28.119 tick 100 00:04:28.119 tick 100 00:04:28.119 tick 250 00:04:28.119 tick 100 00:04:28.119 tick 500 00:04:28.119 tick 100 00:04:28.119 tick 100 00:04:28.119 tick 250 00:04:28.119 tick 100 00:04:28.119 tick 100 00:04:28.119 test_end 00:04:28.119 00:04:28.119 real 0m1.279s 00:04:28.119 user 0m1.210s 00:04:28.119 sys 0m0.064s 00:04:28.119 20:03:14 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:28.119 20:03:14 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:28.119 ************************************ 00:04:28.119 END TEST event_reactor 00:04:28.119 ************************************ 00:04:28.119 20:03:14 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:28.119 20:03:14 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:28.119 20:03:14 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:28.119 20:03:14 event -- common/autotest_common.sh@10 -- # set +x 00:04:28.119 ************************************ 00:04:28.119 START TEST event_reactor_perf 00:04:28.119 ************************************ 00:04:28.119 20:03:14 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:28.119 [2024-05-16 20:03:14.997712] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:04:28.119 [2024-05-16 20:03:14.997779] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96296 ] 00:04:28.119 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.119 [2024-05-16 20:03:15.056176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.119 [2024-05-16 20:03:15.159231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.495 test_start 00:04:29.495 test_end 00:04:29.495 Performance: 437106 events per second 00:04:29.495 00:04:29.495 real 0m1.282s 00:04:29.495 user 0m1.204s 00:04:29.495 sys 0m0.072s 00:04:29.495 20:03:16 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:29.495 20:03:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:29.495 ************************************ 00:04:29.496 END TEST event_reactor_perf 00:04:29.496 ************************************ 00:04:29.496 20:03:16 event -- event/event.sh@49 -- # uname -s 00:04:29.496 20:03:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:29.496 20:03:16 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:29.496 20:03:16 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:29.496 20:03:16 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:29.496 20:03:16 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.496 ************************************ 00:04:29.496 START TEST event_scheduler 00:04:29.496 ************************************ 00:04:29.496 20:03:16 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:29.496 * Looking for test storage... 00:04:29.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:29.496 20:03:16 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:29.496 20:03:16 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=96484 00:04:29.496 20:03:16 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:29.496 20:03:16 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.496 20:03:16 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 96484 00:04:29.496 20:03:16 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 96484 ']' 00:04:29.496 20:03:16 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.496 20:03:16 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:29.496 20:03:16 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:29.496 20:03:16 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:29.496 20:03:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.496 [2024-05-16 20:03:16.409413] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:29.496 [2024-05-16 20:03:16.409499] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96484 ] 00:04:29.496 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.496 [2024-05-16 20:03:16.476052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:29.496 [2024-05-16 20:03:16.593768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.496 [2024-05-16 20:03:16.593837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.496 [2024-05-16 20:03:16.593894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:29.496 [2024-05-16 20:03:16.593898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:29.496 20:03:16 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:29.496 20:03:16 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:04:29.496 20:03:16 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:29.496 20:03:16 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.496 20:03:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.496 POWER: Env isn't set yet! 00:04:29.496 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:29.768 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:29.768 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:29.768 POWER: Cannot get available frequencies of lcore 0 00:04:29.768 POWER: Attempting to initialise PSTAT power management... 00:04:29.768 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:29.768 POWER: Initialized successfully for lcore 0 power management 00:04:29.768 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:29.768 POWER: Initialized successfully for lcore 1 power management 00:04:29.768 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:29.768 POWER: Initialized successfully for lcore 2 power management 00:04:29.768 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:29.768 POWER: Initialized successfully for lcore 3 power management 00:04:29.768 20:03:16 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.768 20:03:16 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:29.768 20:03:16 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.768 20:03:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.768 [2024-05-16 20:03:16.769621] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
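Note on the scheduler subtest above: the trace shows scheduler.sh switching spdk_tgt to the dynamic scheduler (framework_set_scheduler dynamic at scheduler.sh@39) and then exercising the scheduler_plugin RPCs scheduler_thread_create, scheduler_thread_set_active and scheduler_thread_delete. A rough manual equivalent, assuming spdk_tgt is already listening on the default /var/tmp/spdk.sock and that the scheduler_plugin module shipped with test/event/scheduler is importable by rpc.py (the PYTHONPATH export below is an assumption, not something taken from this log), would look like:

    # make the plugin module visible to rpc.py (assumed path inside the spdk tree)
    export PYTHONPATH=$PYTHONPATH:./test/event/scheduler
    # select the dynamic scheduler, as scheduler.sh@39 does above
    ./scripts/rpc.py framework_set_scheduler dynamic
    # create an active thread pinned to core 0 with 100% activity (mirrors scheduler.sh@12)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # thread ids 11 and 12 below are simply the ones this run ends up with
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12
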
00:04:29.768 20:03:16 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.768 20:03:16 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:29.768 20:03:16 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:29.768 20:03:16 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:29.768 20:03:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.768 ************************************ 00:04:29.768 START TEST scheduler_create_thread 00:04:29.768 ************************************ 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.768 2 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.768 3 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.768 4 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.768 5 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.768 6 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.768 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.768 7 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.769 8 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.769 9 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.769 10 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.769 20:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.335 20:03:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:30.335 20:03:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:30.335 20:03:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:30.335 20:03:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.711 20:03:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.711 20:03:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:31.711 20:03:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:31.711 20:03:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.711 20:03:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.086 20:03:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.086 00:04:33.086 real 0m3.099s 00:04:33.086 user 0m0.010s 00:04:33.086 sys 0m0.004s 00:04:33.086 20:03:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:33.086 20:03:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.086 ************************************ 00:04:33.086 END TEST scheduler_create_thread 00:04:33.086 ************************************ 00:04:33.086 20:03:19 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:33.086 20:03:19 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 96484 00:04:33.086 20:03:19 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 96484 ']' 00:04:33.086 20:03:19 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 96484 00:04:33.086 20:03:19 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:04:33.086 20:03:19 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:33.086 20:03:19 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 96484 00:04:33.086 20:03:19 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:04:33.086 20:03:19 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:04:33.086 20:03:19 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 96484' 00:04:33.086 killing process with pid 96484 00:04:33.086 20:03:19 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 96484 00:04:33.086 20:03:19 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 96484 00:04:33.346 [2024-05-16 20:03:20.273776] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
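Note on the teardown seen after every subtest in this log: killprocess from common/autotest_common.sh always follows the same sequence, visible in the xtrace above — check the pid is still alive with kill -0, look up its process name with ps (reactor_2 here) to decide whether sudo handling is needed, send the default signal with kill, then wait so the next subtest starts from a clean slate. A condensed sketch of that flow, simplified from the trace (the pid value is just the one from this run):

    pid=96484
    ps --no-headers -o comm= "$pid"   # expect a reactor_N name, not sudo
    kill -0 "$pid"                    # is the target still running?
    kill "$pid"                       # default SIGTERM, echoed as 'killing process with pid ...'
    wait "$pid"                       # reap it before the next subtest begins
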
00:04:33.346 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:33.346 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:33.346 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:04:33.346 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:33.346 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:04:33.346 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:33.346 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:04:33.346 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:33.605 00:04:33.605 real 0m4.234s 00:04:33.605 user 0m6.822s 00:04:33.605 sys 0m0.328s 00:04:33.605 20:03:20 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:33.605 20:03:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.605 ************************************ 00:04:33.605 END TEST event_scheduler 00:04:33.605 ************************************ 00:04:33.605 20:03:20 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:33.605 20:03:20 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:33.605 20:03:20 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:33.605 20:03:20 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:33.605 20:03:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.605 ************************************ 00:04:33.605 START TEST app_repeat 00:04:33.605 ************************************ 00:04:33.605 20:03:20 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:04:33.605 20:03:20 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.605 20:03:20 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.605 20:03:20 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:33.605 20:03:20 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.605 20:03:20 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:33.605 20:03:20 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:33.605 20:03:20 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:33.605 20:03:20 event.app_repeat -- event/event.sh@19 -- # repeat_pid=97060 00:04:33.605 20:03:20 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:33.605 20:03:20 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.605 20:03:20 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 97060' 00:04:33.605 Process app_repeat pid: 97060 00:04:33.605 20:03:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:33.605 20:03:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:33.605 spdk_app_start Round 0 00:04:33.605 20:03:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 97060 /var/tmp/spdk-nbd.sock 00:04:33.605 20:03:20 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 97060 ']' 00:04:33.605 20:03:20 event.app_repeat -- common/autotest_common.sh@831 
-- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:33.605 20:03:20 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:33.605 20:03:20 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:33.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:33.605 20:03:20 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:33.605 20:03:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:33.605 [2024-05-16 20:03:20.634134] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:33.605 [2024-05-16 20:03:20.634213] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97060 ] 00:04:33.605 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.605 [2024-05-16 20:03:20.691511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:33.864 [2024-05-16 20:03:20.799056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.864 [2024-05-16 20:03:20.799060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.864 20:03:20 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:33.864 20:03:20 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:33.864 20:03:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.123 Malloc0 00:04:34.123 20:03:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.382 Malloc1 00:04:34.382 20:03:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:34.382 20:03:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.382 20:03:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.382 20:03:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:34.382 20:03:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.382 20:03:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:34.382 20:03:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:34.382 20:03:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.382 20:03:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.382 20:03:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:34.382 20:03:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.382 20:03:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:34.382 20:03:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:34.382 20:03:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:34.382 20:03:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.382 20:03:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:34.640 /dev/nbd0 00:04:34.640 20:03:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:34.640 20:03:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:34.640 20:03:21 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:34.640 20:03:21 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:34.640 20:03:21 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:34.640 20:03:21 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:34.640 20:03:21 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:34.640 20:03:21 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:34.640 20:03:21 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:34.640 20:03:21 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:34.640 20:03:21 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:34.640 1+0 records in 00:04:34.640 1+0 records out 00:04:34.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00015065 s, 27.2 MB/s 00:04:34.640 20:03:21 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.640 20:03:21 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:34.640 20:03:21 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.640 20:03:21 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:34.640 20:03:21 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:34.640 20:03:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.640 20:03:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.640 20:03:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:34.899 /dev/nbd1 00:04:34.899 20:03:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:34.899 20:03:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:34.899 20:03:21 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:34.899 20:03:21 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:34.899 20:03:21 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:34.899 20:03:21 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:34.899 20:03:21 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:34.899 20:03:21 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:34.899 20:03:21 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:34.899 20:03:21 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:34.899 20:03:21 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:34.899 1+0 records in 00:04:34.899 1+0 records out 00:04:34.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000287726 s, 14.2 MB/s 00:04:34.899 20:03:21 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.899 20:03:21 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:34.899 20:03:21 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.899 20:03:21 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:34.899 20:03:21 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:34.899 20:03:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.899 20:03:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.899 20:03:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.899 20:03:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.899 20:03:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:35.159 { 00:04:35.159 "nbd_device": "/dev/nbd0", 00:04:35.159 "bdev_name": "Malloc0" 00:04:35.159 }, 00:04:35.159 { 00:04:35.159 "nbd_device": "/dev/nbd1", 00:04:35.159 "bdev_name": "Malloc1" 00:04:35.159 } 00:04:35.159 ]' 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:35.159 { 00:04:35.159 "nbd_device": "/dev/nbd0", 00:04:35.159 "bdev_name": "Malloc0" 00:04:35.159 }, 00:04:35.159 { 00:04:35.159 "nbd_device": "/dev/nbd1", 00:04:35.159 "bdev_name": "Malloc1" 00:04:35.159 } 00:04:35.159 ]' 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:35.159 /dev/nbd1' 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:35.159 /dev/nbd1' 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:35.159 256+0 records in 00:04:35.159 256+0 records out 00:04:35.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500436 s, 210 MB/s 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:35.159 256+0 records in 00:04:35.159 256+0 records out 00:04:35.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236294 s, 44.4 MB/s 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:35.159 20:03:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:35.418 256+0 records in 00:04:35.418 256+0 records out 00:04:35.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255748 s, 41.0 MB/s 00:04:35.418 20:03:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:35.418 20:03:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.418 20:03:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:35.418 20:03:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:35.418 20:03:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:35.418 20:03:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:35.418 20:03:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:35.418 20:03:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:35.418 20:03:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:35.418 20:03:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:35.418 20:03:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:35.418 20:03:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:35.418 20:03:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:35.418 20:03:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.418 20:03:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.418 20:03:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:35.418 20:03:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:35.418 20:03:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:35.418 20:03:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.677 20:03:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:35.936 20:03:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:35.936 20:03:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:35.936 20:03:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:36.194 20:03:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:36.194 20:03:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:36.194 20:03:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:36.194 20:03:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:36.194 20:03:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:36.194 20:03:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:36.194 20:03:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:36.194 20:03:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:36.194 20:03:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:36.194 20:03:23 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:36.452 20:03:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:36.710 [2024-05-16 20:03:23.632894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.710 [2024-05-16 20:03:23.732206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.710 [2024-05-16 20:03:23.732207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.710 [2024-05-16 20:03:23.781366] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:36.710 [2024-05-16 20:03:23.781436] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
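Round 0 above is one pass of nbd_rpc_data_verify: create two 64 MB malloc bdevs over the app's RPC socket, expose them as kernel NBD devices, write the same 1 MiB of random data to both through the block layer, and compare it back before tearing the devices down. A condensed sketch of that flow, using the RPC calls visible in the trace (with the workspace path replaced by an $SPDK placeholder), looks like this:

# Condensed sketch of the Round 0 data-verify flow traced above.
# $SPDK is a placeholder for the checked-out SPDK tree; adjust to your layout.
SPDK=/path/to/spdk
SOCK=/var/tmp/spdk-nbd.sock
rpc() { "$SPDK/scripts/rpc.py" -s "$SOCK" "$@"; }

rpc bdev_malloc_create 64 4096            # -> Malloc0: 64 MB bdev, 4096-byte blocks
rpc bdev_malloc_create 64 4096            # -> Malloc1
rpc nbd_start_disk Malloc0 /dev/nbd0      # expose each bdev through the kernel nbd driver
rpc nbd_start_disk Malloc1 /dev/nbd1

tmp=$SPDK/test/event/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write it through the block layer
    cmp -b -n 1M "$tmp" "$nbd"                              # read back and verify byte-for-byte
done
rm "$tmp"

rpc nbd_stop_disk /dev/nbd0               # detach both devices before the next round
rpc nbd_stop_disk /dev/nbd1

Rounds 1 and 2 below repeat the same sequence against a freshly restarted app_repeat instance.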
00:04:39.994 20:03:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:39.994 20:03:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:39.994 spdk_app_start Round 1 00:04:39.994 20:03:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 97060 /var/tmp/spdk-nbd.sock 00:04:39.994 20:03:26 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 97060 ']' 00:04:39.994 20:03:26 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:39.994 20:03:26 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:39.994 20:03:26 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:39.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:39.994 20:03:26 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:39.994 20:03:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:39.994 20:03:26 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:39.994 20:03:26 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:39.994 20:03:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.994 Malloc0 00:04:39.994 20:03:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.252 Malloc1 00:04:40.252 20:03:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.252 20:03:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.252 20:03:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.252 20:03:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:40.252 20:03:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.252 20:03:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:40.252 20:03:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.252 20:03:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.252 20:03:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.252 20:03:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:40.252 20:03:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.252 20:03:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:40.252 20:03:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:40.252 20:03:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:40.252 20:03:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.252 20:03:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:40.252 /dev/nbd0 00:04:40.512 20:03:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:40.512 20:03:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:04:40.512 20:03:27 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:40.512 20:03:27 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:40.512 20:03:27 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:40.512 20:03:27 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:40.512 20:03:27 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:40.512 20:03:27 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:40.512 20:03:27 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:40.512 20:03:27 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:40.512 20:03:27 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.512 1+0 records in 00:04:40.512 1+0 records out 00:04:40.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000141357 s, 29.0 MB/s 00:04:40.512 20:03:27 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.512 20:03:27 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:40.512 20:03:27 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.512 20:03:27 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:40.512 20:03:27 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:40.512 20:03:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.512 20:03:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.512 20:03:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:40.771 /dev/nbd1 00:04:40.771 20:03:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:40.771 20:03:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:40.771 20:03:27 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:40.771 20:03:27 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:40.771 20:03:27 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:40.771 20:03:27 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:40.771 20:03:27 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:40.771 20:03:27 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:40.771 20:03:27 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:40.771 20:03:27 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:40.771 20:03:27 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.771 1+0 records in 00:04:40.771 1+0 records out 00:04:40.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017039 s, 24.0 MB/s 00:04:40.771 20:03:27 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.771 20:03:27 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:40.771 20:03:27 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.771 20:03:27 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:40.771 20:03:27 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:40.771 20:03:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.771 20:03:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.771 20:03:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.771 20:03:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.771 20:03:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.030 20:03:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:41.030 { 00:04:41.030 "nbd_device": "/dev/nbd0", 00:04:41.030 "bdev_name": "Malloc0" 00:04:41.030 }, 00:04:41.030 { 00:04:41.030 "nbd_device": "/dev/nbd1", 00:04:41.030 "bdev_name": "Malloc1" 00:04:41.030 } 00:04:41.030 ]' 00:04:41.030 20:03:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:41.030 { 00:04:41.030 "nbd_device": "/dev/nbd0", 00:04:41.030 "bdev_name": "Malloc0" 00:04:41.030 }, 00:04:41.030 { 00:04:41.030 "nbd_device": "/dev/nbd1", 00:04:41.030 "bdev_name": "Malloc1" 00:04:41.030 } 00:04:41.030 ]' 00:04:41.030 20:03:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.030 20:03:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:41.030 /dev/nbd1' 00:04:41.030 20:03:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:41.030 /dev/nbd1' 00:04:41.030 20:03:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.030 20:03:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:41.030 20:03:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:41.030 20:03:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:41.030 20:03:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:41.030 20:03:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:41.030 20:03:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.030 20:03:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.030 20:03:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:41.030 20:03:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.030 20:03:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:41.030 20:03:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:41.030 256+0 records in 00:04:41.030 256+0 records out 00:04:41.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00534656 s, 196 MB/s 00:04:41.030 20:03:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.030 20:03:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:41.030 256+0 records in 00:04:41.030 256+0 records out 00:04:41.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0238366 s, 44.0 MB/s 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:41.030 256+0 records in 00:04:41.030 256+0 records out 00:04:41.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252227 s, 41.6 MB/s 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.030 20:03:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:41.288 20:03:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:41.288 20:03:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:41.288 20:03:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:41.288 20:03:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.288 20:03:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.288 20:03:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:41.288 20:03:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.288 20:03:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.288 20:03:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.288 20:03:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:41.546 20:03:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:41.546 20:03:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:41.546 20:03:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:41.547 20:03:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.547 20:03:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.547 20:03:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:41.547 20:03:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.547 20:03:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.547 20:03:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.547 20:03:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.547 20:03:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.805 20:03:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:41.805 20:03:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:41.805 20:03:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.805 20:03:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:41.805 20:03:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:41.805 20:03:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.805 20:03:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:41.805 20:03:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:41.805 20:03:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:41.805 20:03:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:41.805 20:03:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:41.805 20:03:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:41.805 20:03:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:42.063 20:03:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:42.321 [2024-05-16 20:03:29.377636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.580 [2024-05-16 20:03:29.493461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.580 [2024-05-16 20:03:29.493466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.580 [2024-05-16 20:03:29.554918] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:42.580 [2024-05-16 20:03:29.554978] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
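Around every nbd_start_disk/nbd_stop_disk call above, the trace polls waitfornbd and waitfornbd_exit, which watch /proc/partitions for the device name with up to 20 attempts and, on attach, retry a one-block O_DIRECT read until it returns data. A sketch reconstructed from the traced steps is below; the sleep interval and the /tmp scratch path are assumptions (the suite keeps its scratch file under $SPDK/test/event/nbdtest).

# Reconstructed sketch of the waitfornbd / waitfornbd_exit polling seen in the trace.
# The 0.1s interval and the /tmp scratch path are assumptions, not visible in this log.
waitfornbd() {
    local nbd_name=$1 i size tmpfile=/tmp/nbdtest
    for ((i = 1; i <= 20; i++)); do                   # wait for the kernel to register the device
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    ((i <= 20)) || return 1
    for ((i = 1; i <= 20; i++)); do                   # retry until a direct read returns real data
        if dd if=/dev/$nbd_name of="$tmpfile" bs=4096 count=1 iflag=direct 2>/dev/null; then
            size=$(stat -c %s "$tmpfile")
            rm -f "$tmpfile"
            [ "$size" != 0 ] && return 0              # trace: '[' 4096 '!=' 0 ']'
        fi
        sleep 0.1
    done
    return 1
}

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do                   # wait until the device leaves /proc/partitions
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1
    done
    return 1
}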
00:04:45.113 20:03:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:45.113 20:03:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:45.113 spdk_app_start Round 2 00:04:45.113 20:03:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 97060 /var/tmp/spdk-nbd.sock 00:04:45.113 20:03:32 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 97060 ']' 00:04:45.113 20:03:32 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:45.113 20:03:32 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:45.113 20:03:32 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:45.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:45.113 20:03:32 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:45.113 20:03:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:45.372 20:03:32 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:45.372 20:03:32 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:45.372 20:03:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.630 Malloc0 00:04:45.630 20:03:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.889 Malloc1 00:04:45.889 20:03:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.889 20:03:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.889 20:03:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.889 20:03:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:45.889 20:03:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.889 20:03:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:45.889 20:03:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.889 20:03:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.889 20:03:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.889 20:03:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:45.889 20:03:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.889 20:03:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:45.889 20:03:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:45.889 20:03:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:45.889 20:03:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.889 20:03:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:46.149 /dev/nbd0 00:04:46.149 20:03:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:46.149 20:03:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:04:46.149 20:03:33 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:46.149 20:03:33 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:46.149 20:03:33 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:46.149 20:03:33 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:46.149 20:03:33 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:46.149 20:03:33 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:46.149 20:03:33 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:46.149 20:03:33 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:46.149 20:03:33 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.149 1+0 records in 00:04:46.149 1+0 records out 00:04:46.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201611 s, 20.3 MB/s 00:04:46.149 20:03:33 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.149 20:03:33 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:46.149 20:03:33 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.149 20:03:33 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:46.149 20:03:33 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:46.149 20:03:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.149 20:03:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.149 20:03:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:46.408 /dev/nbd1 00:04:46.408 20:03:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:46.408 20:03:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:46.408 20:03:33 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:46.408 20:03:33 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:46.408 20:03:33 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:46.408 20:03:33 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:46.408 20:03:33 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:46.408 20:03:33 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:46.408 20:03:33 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:46.408 20:03:33 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:46.408 20:03:33 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.408 1+0 records in 00:04:46.408 1+0 records out 00:04:46.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178095 s, 23.0 MB/s 00:04:46.408 20:03:33 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.408 20:03:33 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:46.408 20:03:33 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.408 20:03:33 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:46.408 20:03:33 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:46.408 20:03:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.408 20:03:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.408 20:03:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.408 20:03:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.408 20:03:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:46.665 { 00:04:46.665 "nbd_device": "/dev/nbd0", 00:04:46.665 "bdev_name": "Malloc0" 00:04:46.665 }, 00:04:46.665 { 00:04:46.665 "nbd_device": "/dev/nbd1", 00:04:46.665 "bdev_name": "Malloc1" 00:04:46.665 } 00:04:46.665 ]' 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:46.665 { 00:04:46.665 "nbd_device": "/dev/nbd0", 00:04:46.665 "bdev_name": "Malloc0" 00:04:46.665 }, 00:04:46.665 { 00:04:46.665 "nbd_device": "/dev/nbd1", 00:04:46.665 "bdev_name": "Malloc1" 00:04:46.665 } 00:04:46.665 ]' 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:46.665 /dev/nbd1' 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:46.665 /dev/nbd1' 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:46.665 256+0 records in 00:04:46.665 256+0 records out 00:04:46.665 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515941 s, 203 MB/s 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:46.665 256+0 records in 00:04:46.665 256+0 records out 00:04:46.665 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.023901 s, 43.9 MB/s 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:46.665 256+0 records in 00:04:46.665 256+0 records out 00:04:46.665 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254214 s, 41.2 MB/s 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.665 20:03:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:46.923 20:03:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:46.923 20:03:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:46.923 20:03:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:46.923 20:03:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.923 20:03:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.923 20:03:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:46.923 20:03:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:46.923 20:03:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.923 20:03:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.923 20:03:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:47.182 20:03:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:47.182 20:03:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:47.182 20:03:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:47.182 20:03:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.182 20:03:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.182 20:03:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:47.182 20:03:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.183 20:03:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.183 20:03:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.183 20:03:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.183 20:03:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.441 20:03:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:47.441 20:03:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:47.441 20:03:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.441 20:03:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:47.441 20:03:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:47.441 20:03:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.441 20:03:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:47.441 20:03:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:47.441 20:03:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:47.441 20:03:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:47.441 20:03:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:47.441 20:03:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:47.441 20:03:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:47.700 20:03:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:48.267 [2024-05-16 20:03:35.105492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.267 [2024-05-16 20:03:35.220285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.267 [2024-05-16 20:03:35.220290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.267 [2024-05-16 20:03:35.274242] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:48.267 [2024-05-16 20:03:35.274310] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
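After both devices are stopped, each round confirms via nbd_get_disks that nothing is left attached (the jq/grep pipeline above reduces the returned JSON to a count of /dev/nbd entries) and only then asks the target to shut down with spdk_kill_instance SIGTERM. A self-contained sketch of that final check, again with an $SPDK placeholder for the workspace path:

# Sketch of the end-of-round check traced above: verify no NBD disks remain,
# then request a graceful shutdown. $SPDK is a placeholder path.
SPDK=/path/to/spdk
SOCK=/var/tmp/spdk-nbd.sock

json=$("$SPDK/scripts/rpc.py" -s "$SOCK" nbd_get_disks)     # '[]' once both disks are stopped
count=$(echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
if [ "$count" -ne 0 ]; then
    echo "expected 0 NBD devices, still found $count" >&2
    exit 1
fi
"$SPDK/scripts/rpc.py" -s "$SOCK" spdk_kill_instance SIGTERM   # app_repeat then sleeps and starts the next round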
00:04:50.798 20:03:37 event.app_repeat -- event/event.sh@38 -- # waitforlisten 97060 /var/tmp/spdk-nbd.sock 00:04:50.798 20:03:37 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 97060 ']' 00:04:50.798 20:03:37 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.798 20:03:37 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:50.798 20:03:37 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:50.798 20:03:37 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:50.798 20:03:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:51.056 20:03:38 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:51.056 20:03:38 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:51.057 20:03:38 event.app_repeat -- event/event.sh@39 -- # killprocess 97060 00:04:51.057 20:03:38 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 97060 ']' 00:04:51.057 20:03:38 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 97060 00:04:51.057 20:03:38 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:04:51.057 20:03:38 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:51.057 20:03:38 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 97060 00:04:51.057 20:03:38 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:51.057 20:03:38 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:51.057 20:03:38 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 97060' 00:04:51.057 killing process with pid 97060 00:04:51.057 20:03:38 event.app_repeat -- common/autotest_common.sh@965 -- # kill 97060 00:04:51.057 20:03:38 event.app_repeat -- common/autotest_common.sh@970 -- # wait 97060 00:04:51.315 spdk_app_start is called in Round 0. 00:04:51.315 Shutdown signal received, stop current app iteration 00:04:51.315 Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 reinitialization... 00:04:51.315 spdk_app_start is called in Round 1. 00:04:51.315 Shutdown signal received, stop current app iteration 00:04:51.315 Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 reinitialization... 00:04:51.315 spdk_app_start is called in Round 2. 00:04:51.315 Shutdown signal received, stop current app iteration 00:04:51.315 Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 reinitialization... 00:04:51.315 spdk_app_start is called in Round 3. 
00:04:51.315 Shutdown signal received, stop current app iteration 00:04:51.315 20:03:38 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:51.315 20:03:38 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:51.315 00:04:51.315 real 0m17.745s 00:04:51.315 user 0m38.882s 00:04:51.315 sys 0m3.203s 00:04:51.315 20:03:38 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:51.315 20:03:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:51.315 ************************************ 00:04:51.315 END TEST app_repeat 00:04:51.315 ************************************ 00:04:51.315 20:03:38 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:51.315 20:03:38 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:51.315 20:03:38 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:51.315 20:03:38 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:51.315 20:03:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.315 ************************************ 00:04:51.315 START TEST cpu_locks 00:04:51.315 ************************************ 00:04:51.315 20:03:38 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:51.315 * Looking for test storage... 00:04:51.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:51.315 20:03:38 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:51.315 20:03:38 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:51.315 20:03:38 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:51.315 20:03:38 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:51.315 20:03:38 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:51.315 20:03:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:51.315 20:03:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.574 ************************************ 00:04:51.574 START TEST default_locks 00:04:51.574 ************************************ 00:04:51.574 20:03:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:04:51.574 20:03:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=99412 00:04:51.574 20:03:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.574 20:03:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 99412 00:04:51.574 20:03:38 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 99412 ']' 00:04:51.574 20:03:38 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.574 20:03:38 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:51.574 20:03:38 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:51.574 20:03:38 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:51.574 20:03:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.574 [2024-05-16 20:03:38.531220] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:51.574 [2024-05-16 20:03:38.531302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99412 ] 00:04:51.574 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.574 [2024-05-16 20:03:38.596341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.574 [2024-05-16 20:03:38.713102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.510 20:03:39 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:52.510 20:03:39 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:04:52.510 20:03:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 99412 00:04:52.510 20:03:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 99412 00:04:52.510 20:03:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:52.769 lslocks: write error 00:04:52.769 20:03:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 99412 00:04:52.769 20:03:39 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 99412 ']' 00:04:52.769 20:03:39 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 99412 00:04:52.769 20:03:39 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:04:52.769 20:03:39 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:52.769 20:03:39 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99412 00:04:52.769 20:03:39 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:52.769 20:03:39 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:52.770 20:03:39 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99412' 00:04:52.770 killing process with pid 99412 00:04:52.770 20:03:39 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 99412 00:04:52.770 20:03:39 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 99412 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 99412 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 99412 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 99412 
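The locks_exist check traced above boils down to running lslocks against the target PID and looking for an spdk_cpu_lock entry; the "lslocks: write error" line is most likely just lslocks hitting a closed pipe, since grep -q exits as soon as it sees a match. A minimal standalone sketch of that check, assuming util-linux lslocks is installed and a spdk_tgt started with -m 0x1 is running (the PID below is a placeholder, not taken from this run):

# Placeholder PID; substitute the PID of the running spdk_tgt.
tgt_pid=99412
if lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock; then
    echo "PID $tgt_pid holds its CPU core lock"
else
    echo "no spdk_cpu_lock entry found for PID $tgt_pid" >&2
fi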
00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 99412 ']' 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (99412) - No such process 00:04:53.337 ERROR: process (pid: 99412) is no longer running 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:53.337 00:04:53.337 real 0m1.706s 00:04:53.337 user 0m1.816s 00:04:53.337 sys 0m0.546s 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:53.337 20:03:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.337 ************************************ 00:04:53.337 END TEST default_locks 00:04:53.337 ************************************ 00:04:53.337 20:03:40 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:53.337 20:03:40 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.337 20:03:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.337 20:03:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.337 ************************************ 00:04:53.337 START TEST default_locks_via_rpc 00:04:53.337 ************************************ 00:04:53.338 20:03:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:04:53.338 20:03:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=99698 00:04:53.338 20:03:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.338 20:03:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 99698 00:04:53.338 20:03:40 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@827 -- # '[' -z 99698 ']' 00:04:53.338 20:03:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.338 20:03:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:53.338 20:03:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.338 20:03:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:53.338 20:03:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.338 [2024-05-16 20:03:40.291819] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:53.338 [2024-05-16 20:03:40.291942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99698 ] 00:04:53.338 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.338 [2024-05-16 20:03:40.351233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.338 [2024-05-16 20:03:40.462381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.604 20:03:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:53.604 20:03:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:53.604 20:03:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:53.604 20:03:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.604 20:03:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.604 20:03:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.604 20:03:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:53.604 20:03:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:53.604 20:03:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:53.604 20:03:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:53.604 20:03:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:53.604 20:03:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.604 20:03:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.604 20:03:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.604 20:03:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 99698 00:04:53.604 20:03:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 99698 00:04:53.604 20:03:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:54.171 20:03:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 99698 00:04:54.171 20:03:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- 
# '[' -z 99698 ']' 00:04:54.171 20:03:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 99698 00:04:54.171 20:03:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:04:54.171 20:03:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:54.171 20:03:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99698 00:04:54.171 20:03:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:54.171 20:03:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:54.171 20:03:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99698' 00:04:54.171 killing process with pid 99698 00:04:54.171 20:03:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 99698 00:04:54.171 20:03:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 99698 00:04:54.429 00:04:54.429 real 0m1.293s 00:04:54.429 user 0m1.241s 00:04:54.429 sys 0m0.539s 00:04:54.429 20:03:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:54.429 20:03:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.429 ************************************ 00:04:54.429 END TEST default_locks_via_rpc 00:04:54.429 ************************************ 00:04:54.429 20:03:41 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:54.429 20:03:41 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:54.429 20:03:41 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:54.429 20:03:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:54.687 ************************************ 00:04:54.687 START TEST non_locking_app_on_locked_coremask 00:04:54.687 ************************************ 00:04:54.687 20:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:04:54.687 20:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=99868 00:04:54.687 20:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.687 20:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 99868 /var/tmp/spdk.sock 00:04:54.687 20:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 99868 ']' 00:04:54.687 20:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.687 20:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:54.687 20:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:54.687 20:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:54.687 20:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.687 [2024-05-16 20:03:41.636707] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:54.687 [2024-05-16 20:03:41.636794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99868 ] 00:04:54.687 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.687 [2024-05-16 20:03:41.693574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.687 [2024-05-16 20:03:41.801453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.945 20:03:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:54.945 20:03:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:54.945 20:03:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=99872 00:04:54.945 20:03:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 99872 /var/tmp/spdk2.sock 00:04:54.945 20:03:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 99872 ']' 00:04:54.945 20:03:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:54.945 20:03:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:54.945 20:03:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:54.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:54.945 20:03:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:54.945 20:03:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:54.945 20:03:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.203 [2024-05-16 20:03:42.098096] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:55.203 [2024-05-16 20:03:42.098179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99872 ] 00:04:55.203 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.203 [2024-05-16 20:03:42.196423] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
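At this point the test already has one spdk_tgt (pid 99868) holding the core 0 lock and has just started a second one on the same mask with --disable-cpumask-locks, which is why that second instance reports "CPU core locks deactivated" instead of failing to claim core 0. A rough sketch of the two-instance setup, reusing the binary path and RPC sockets seen in the trace:

# First target claims core 0 and serves RPC on the default /var/tmp/spdk.sock.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 &

# Second target shares core mask 0x1 but skips lock claiming, so it can start;
# it listens on a separate RPC socket so both stay addressable.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 \
    --disable-cpumask-locks -r /var/tmp/spdk2.sock &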
00:04:55.203 [2024-05-16 20:03:42.196472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.461 [2024-05-16 20:03:42.435206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.027 20:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:56.027 20:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:56.027 20:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 99868 00:04:56.027 20:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 99868 00:04:56.027 20:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:56.594 lslocks: write error 00:04:56.594 20:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 99868 00:04:56.594 20:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 99868 ']' 00:04:56.594 20:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 99868 00:04:56.594 20:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:56.594 20:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:56.594 20:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99868 00:04:56.594 20:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:56.594 20:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:56.594 20:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99868' 00:04:56.594 killing process with pid 99868 00:04:56.594 20:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 99868 00:04:56.594 20:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 99868 00:04:57.528 20:03:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 99872 00:04:57.529 20:03:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 99872 ']' 00:04:57.529 20:03:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 99872 00:04:57.529 20:03:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:57.529 20:03:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:57.529 20:03:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99872 00:04:57.529 20:03:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:57.529 20:03:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:57.529 20:03:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99872' 00:04:57.529 killing process with pid 
99872 00:04:57.529 20:03:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 99872 00:04:57.529 20:03:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 99872 00:04:57.787 00:04:57.787 real 0m3.307s 00:04:57.787 user 0m3.461s 00:04:57.787 sys 0m1.044s 00:04:57.787 20:03:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:57.787 20:03:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.787 ************************************ 00:04:57.787 END TEST non_locking_app_on_locked_coremask 00:04:57.787 ************************************ 00:04:57.787 20:03:44 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:57.787 20:03:44 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:57.787 20:03:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:57.787 20:03:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.045 ************************************ 00:04:58.045 START TEST locking_app_on_unlocked_coremask 00:04:58.045 ************************************ 00:04:58.045 20:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:04:58.045 20:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=100300 00:04:58.045 20:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:58.045 20:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 100300 /var/tmp/spdk.sock 00:04:58.045 20:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 100300 ']' 00:04:58.045 20:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.045 20:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:58.045 20:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.045 20:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:58.045 20:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.045 [2024-05-16 20:03:44.995981] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:58.045 [2024-05-16 20:03:44.996073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100300 ] 00:04:58.045 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.045 [2024-05-16 20:03:45.053428] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
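locking_app_on_unlocked_coremask inverts the previous case: here the first target (pid 100300) is the one started with --disable-cpumask-locks, so right after its "CPU core locks deactivated" notice no /var/tmp/spdk_cpu_lock_* files should exist for it. The harness' no_locks helper amounts to a glob check along these lines (a sketch, presuming the same lock-file prefix that check_remaining_locks matches later in this run; nullglob is assumed so an empty match expands to nothing):

# With lock claiming disabled, the lock-file glob should come back empty.
shopt -s nullglob
lock_files=(/var/tmp/spdk_cpu_lock_*)
if (( ${#lock_files[@]} != 0 )); then
    echo "unexpected CPU lock files: ${lock_files[*]}" >&2
    exit 1
fi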
00:04:58.045 [2024-05-16 20:03:45.053467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.045 [2024-05-16 20:03:45.163118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.304 20:03:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:58.304 20:03:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:58.304 20:03:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=100306 00:04:58.304 20:03:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:58.304 20:03:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 100306 /var/tmp/spdk2.sock 00:04:58.304 20:03:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 100306 ']' 00:04:58.304 20:03:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:58.304 20:03:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:58.304 20:03:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:58.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:58.304 20:03:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:58.304 20:03:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.562 [2024-05-16 20:03:45.477312] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:04:58.562 [2024-05-16 20:03:45.477397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100306 ] 00:04:58.562 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.562 [2024-05-16 20:03:45.570247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.820 [2024-05-16 20:03:45.807275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.387 20:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:59.387 20:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:59.387 20:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 100306 00:04:59.387 20:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 100306 00:04:59.387 20:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:59.951 lslocks: write error 00:04:59.951 20:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 100300 00:04:59.951 20:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 100300 ']' 00:04:59.951 20:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 100300 00:04:59.951 20:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:59.951 20:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:59.951 20:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100300 00:04:59.951 20:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:59.951 20:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:59.951 20:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100300' 00:04:59.951 killing process with pid 100300 00:04:59.951 20:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 100300 00:04:59.951 20:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 100300 00:05:00.885 20:03:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 100306 00:05:00.885 20:03:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 100306 ']' 00:05:00.885 20:03:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 100306 00:05:00.885 20:03:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:00.885 20:03:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:00.885 20:03:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100306 00:05:00.885 20:03:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:00.885 
20:03:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:00.885 20:03:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100306' 00:05:00.885 killing process with pid 100306 00:05:00.885 20:03:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 100306 00:05:00.885 20:03:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 100306 00:05:01.144 00:05:01.144 real 0m3.305s 00:05:01.144 user 0m3.417s 00:05:01.144 sys 0m1.069s 00:05:01.144 20:03:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.144 20:03:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.144 ************************************ 00:05:01.144 END TEST locking_app_on_unlocked_coremask 00:05:01.144 ************************************ 00:05:01.144 20:03:48 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:01.144 20:03:48 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:01.144 20:03:48 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.144 20:03:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.404 ************************************ 00:05:01.404 START TEST locking_app_on_locked_coremask 00:05:01.404 ************************************ 00:05:01.404 20:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:05:01.404 20:03:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=100732 00:05:01.404 20:03:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.404 20:03:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 100732 /var/tmp/spdk.sock 00:05:01.404 20:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 100732 ']' 00:05:01.404 20:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.404 20:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:01.404 20:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.404 20:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:01.404 20:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.404 [2024-05-16 20:03:48.353771] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:01.404 [2024-05-16 20:03:48.353864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100732 ] 00:05:01.404 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.404 [2024-05-16 20:03:48.413549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.404 [2024-05-16 20:03:48.532191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.341 20:03:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:02.341 20:03:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:02.341 20:03:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=100870 00:05:02.341 20:03:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:02.341 20:03:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 100870 /var/tmp/spdk2.sock 00:05:02.341 20:03:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:02.341 20:03:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 100870 /var/tmp/spdk2.sock 00:05:02.341 20:03:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:02.341 20:03:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:02.341 20:03:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:02.341 20:03:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:02.341 20:03:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 100870 /var/tmp/spdk2.sock 00:05:02.341 20:03:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 100870 ']' 00:05:02.341 20:03:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.341 20:03:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:02.341 20:03:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:02.341 20:03:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:02.341 20:03:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.341 [2024-05-16 20:03:49.324366] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:02.341 [2024-05-16 20:03:49.324430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100870 ] 00:05:02.341 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.341 [2024-05-16 20:03:49.411086] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 100732 has claimed it. 00:05:02.341 [2024-05-16 20:03:49.411157] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:02.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (100870) - No such process 00:05:02.908 ERROR: process (pid: 100870) is no longer running 00:05:02.908 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:02.908 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:05:02.908 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:02.908 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:02.908 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:02.908 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:02.908 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 100732 00:05:02.908 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 100732 00:05:02.908 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.477 lslocks: write error 00:05:03.477 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 100732 00:05:03.477 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 100732 ']' 00:05:03.477 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 100732 00:05:03.477 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:03.477 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:03.477 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100732 00:05:03.477 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:03.477 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:03.477 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100732' 00:05:03.477 killing process with pid 100732 00:05:03.477 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 100732 00:05:03.477 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 100732 00:05:03.753 00:05:03.753 real 0m2.572s 00:05:03.753 user 0m2.918s 00:05:03.753 sys 0m0.680s 00:05:03.753 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.753 20:03:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.753 ************************************ 00:05:03.753 END TEST locking_app_on_locked_coremask 00:05:03.753 ************************************ 00:05:03.753 20:03:50 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:04.012 20:03:50 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:04.012 20:03:50 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.012 20:03:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.012 ************************************ 00:05:04.012 START TEST locking_overlapped_coremask 00:05:04.012 ************************************ 00:05:04.012 20:03:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:05:04.012 20:03:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=101044 00:05:04.012 20:03:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:04.012 20:03:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 101044 /var/tmp/spdk.sock 00:05:04.012 20:03:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 101044 ']' 00:05:04.012 20:03:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.012 20:03:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:04.012 20:03:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.012 20:03:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:04.012 20:03:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.012 [2024-05-16 20:03:50.980063] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:04.012 [2024-05-16 20:03:50.980160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101044 ] 00:05:04.012 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.012 [2024-05-16 20:03:51.037912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:04.012 [2024-05-16 20:03:51.150008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.012 [2024-05-16 20:03:51.150065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.012 [2024-05-16 20:03:51.150068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.270 20:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:04.270 20:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:04.270 20:03:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=101168 00:05:04.270 20:03:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:04.270 20:03:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 101168 /var/tmp/spdk2.sock 00:05:04.270 20:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:04.270 20:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 101168 /var/tmp/spdk2.sock 00:05:04.270 20:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:04.270 20:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:04.270 20:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:04.270 20:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:04.270 20:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 101168 /var/tmp/spdk2.sock 00:05:04.270 20:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 101168 ']' 00:05:04.270 20:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:04.270 20:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:04.270 20:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:04.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:04.270 20:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:04.270 20:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.527 [2024-05-16 20:03:51.439421] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:04.527 [2024-05-16 20:03:51.439499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101168 ] 00:05:04.527 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.527 [2024-05-16 20:03:51.525170] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 101044 has claimed it. 00:05:04.527 [2024-05-16 20:03:51.525218] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:05.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (101168) - No such process 00:05:05.092 ERROR: process (pid: 101168) is no longer running 00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 101044 00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 101044 ']' 00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 101044 00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101044 00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101044' 00:05:05.092 killing process with pid 101044 00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 101044 
00:05:05.092 20:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 101044 00:05:05.658 00:05:05.658 real 0m1.646s 00:05:05.658 user 0m4.357s 00:05:05.658 sys 0m0.420s 00:05:05.658 20:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:05.658 20:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.658 ************************************ 00:05:05.658 END TEST locking_overlapped_coremask 00:05:05.658 ************************************ 00:05:05.658 20:03:52 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:05.658 20:03:52 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:05.658 20:03:52 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.658 20:03:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.658 ************************************ 00:05:05.658 START TEST locking_overlapped_coremask_via_rpc 00:05:05.658 ************************************ 00:05:05.658 20:03:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:05:05.658 20:03:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=101341 00:05:05.658 20:03:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:05.658 20:03:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 101341 /var/tmp/spdk.sock 00:05:05.658 20:03:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 101341 ']' 00:05:05.658 20:03:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.658 20:03:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:05.658 20:03:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.658 20:03:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:05.658 20:03:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.658 [2024-05-16 20:03:52.670066] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:05.658 [2024-05-16 20:03:52.670140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101341 ] 00:05:05.658 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.658 [2024-05-16 20:03:52.734774] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
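The locking_overlapped_coremask test above ran its primary target with -m 0x7, so check_remaining_locks expected exactly /var/tmp/spdk_cpu_lock_000 through _002 to exist, one per claimed core, which is what the long [[ ... ]] pattern match in the trace compares. When chasing a failure of that kind by hand, something like the following shows which lock files exist and which process holds them (a sketch, assuming util-linux lslocks):

# List the per-core lock files created for core mask 0x7 ...
ls -l /var/tmp/spdk_cpu_lock_*
# ... and show the process holding each one (lslocks prints the lock path).
lslocks | grep spdk_cpu_lock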
00:05:05.658 [2024-05-16 20:03:52.734820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:05.916 [2024-05-16 20:03:52.857982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.916 [2024-05-16 20:03:52.858031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.916 [2024-05-16 20:03:52.858049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.175 20:03:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:06.175 20:03:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:06.175 20:03:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=101351 00:05:06.175 20:03:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:06.175 20:03:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 101351 /var/tmp/spdk2.sock 00:05:06.175 20:03:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 101351 ']' 00:05:06.175 20:03:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:06.175 20:03:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:06.175 20:03:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:06.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:06.175 20:03:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:06.175 20:03:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.175 [2024-05-16 20:03:53.141597] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:06.175 [2024-05-16 20:03:53.141680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101351 ] 00:05:06.175 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.175 [2024-05-16 20:03:53.229706] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:06.175 [2024-05-16 20:03:53.229749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:06.433 [2024-05-16 20:03:53.453001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.433 [2024-05-16 20:03:53.453058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:06.433 [2024-05-16 20:03:53.453061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.999 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:06.999 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:06.999 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:06.999 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.999 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.999 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.999 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:06.999 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:06.999 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:06.999 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:06.999 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.999 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:06.999 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.999 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:06.999 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.999 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.999 [2024-05-16 20:03:54.082947] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 101341 has claimed it. 
00:05:06.999 request: 00:05:06.999 { 00:05:06.999 "method": "framework_enable_cpumask_locks", 00:05:06.999 "req_id": 1 00:05:06.999 } 00:05:06.999 Got JSON-RPC error response 00:05:06.999 response: 00:05:06.999 { 00:05:06.999 "code": -32603, 00:05:06.999 "message": "Failed to claim CPU core: 2" 00:05:07.000 } 00:05:07.000 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:07.000 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:07.000 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:07.000 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:07.000 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:07.000 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 101341 /var/tmp/spdk.sock 00:05:07.000 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 101341 ']' 00:05:07.000 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.000 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:07.000 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.000 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:07.000 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.258 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:07.258 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:07.258 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 101351 /var/tmp/spdk2.sock 00:05:07.258 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 101351 ']' 00:05:07.258 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.258 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:07.258 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
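The failure in the request/response block above comes back as an ordinary JSON-RPC error object (code -32603, internal error) with the method name echoed in the request. A hedged reconstruction of the wire-level request that produced it, assuming the usual JSON-RPC 2.0 framing over the Unix socket (only the method name and id are taken from the log):

    {"jsonrpc": "2.0", "method": "framework_enable_cpumask_locks", "id": 1}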
00:05:07.258 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:07.258 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.517 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:07.517 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:07.517 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:07.517 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:07.517 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:07.517 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:07.517 00:05:07.517 real 0m1.969s 00:05:07.517 user 0m1.039s 00:05:07.517 sys 0m0.184s 00:05:07.517 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.517 20:03:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.517 ************************************ 00:05:07.517 END TEST locking_overlapped_coremask_via_rpc 00:05:07.517 ************************************ 00:05:07.517 20:03:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:07.517 20:03:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 101341 ]] 00:05:07.517 20:03:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 101341 00:05:07.517 20:03:54 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 101341 ']' 00:05:07.517 20:03:54 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 101341 00:05:07.517 20:03:54 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:07.517 20:03:54 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:07.517 20:03:54 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101341 00:05:07.517 20:03:54 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:07.517 20:03:54 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:07.517 20:03:54 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101341' 00:05:07.517 killing process with pid 101341 00:05:07.517 20:03:54 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 101341 00:05:07.517 20:03:54 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 101341 00:05:08.084 20:03:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 101351 ]] 00:05:08.084 20:03:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 101351 00:05:08.084 20:03:55 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 101351 ']' 00:05:08.084 20:03:55 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 101351 00:05:08.084 20:03:55 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:08.084 20:03:55 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
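The check_remaining_locks step above reduces to a filename comparison: with locks re-enabled, exactly one /var/tmp/spdk_cpu_lock_NNN file should exist for each of the three cores the first target claimed (0-2), and none for the second target, which never managed to claim its set. A minimal sketch of the same assertion, copied from the glob-versus-brace-expansion pattern in the xtrace (the fallback echo is added here for illustration):

    locks=(/var/tmp/spdk_cpu_lock_*)                      # what is actually on disk
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # one file per claimed core
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] || echo "unexpected lock files: ${locks[*]}"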
00:05:08.084 20:03:55 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101351 00:05:08.084 20:03:55 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:08.084 20:03:55 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:08.084 20:03:55 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101351' 00:05:08.084 killing process with pid 101351 00:05:08.084 20:03:55 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 101351 00:05:08.084 20:03:55 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 101351 00:05:08.652 20:03:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:08.652 20:03:55 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:08.652 20:03:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 101341 ]] 00:05:08.652 20:03:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 101341 00:05:08.652 20:03:55 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 101341 ']' 00:05:08.652 20:03:55 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 101341 00:05:08.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (101341) - No such process 00:05:08.652 20:03:55 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 101341 is not found' 00:05:08.652 Process with pid 101341 is not found 00:05:08.652 20:03:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 101351 ]] 00:05:08.652 20:03:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 101351 00:05:08.652 20:03:55 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 101351 ']' 00:05:08.652 20:03:55 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 101351 00:05:08.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (101351) - No such process 00:05:08.652 20:03:55 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 101351 is not found' 00:05:08.652 Process with pid 101351 is not found 00:05:08.652 20:03:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:08.652 00:05:08.652 real 0m17.104s 00:05:08.652 user 0m28.921s 00:05:08.652 sys 0m5.321s 00:05:08.652 20:03:55 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:08.652 20:03:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.652 ************************************ 00:05:08.652 END TEST cpu_locks 00:05:08.652 ************************************ 00:05:08.652 00:05:08.652 real 0m43.314s 00:05:08.652 user 1m21.381s 00:05:08.652 sys 0m9.320s 00:05:08.652 20:03:55 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:08.652 20:03:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.652 ************************************ 00:05:08.652 END TEST event 00:05:08.652 ************************************ 00:05:08.652 20:03:55 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:08.652 20:03:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:08.652 20:03:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.652 20:03:55 -- common/autotest_common.sh@10 -- # set +x 00:05:08.652 ************************************ 00:05:08.652 START TEST thread 00:05:08.652 ************************************ 00:05:08.653 20:03:55 thread -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:08.653 * Looking for test storage... 00:05:08.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:08.653 20:03:55 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:08.653 20:03:55 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:08.653 20:03:55 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.653 20:03:55 thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.653 ************************************ 00:05:08.653 START TEST thread_poller_perf 00:05:08.653 ************************************ 00:05:08.653 20:03:55 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:08.653 [2024-05-16 20:03:55.674660] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:08.653 [2024-05-16 20:03:55.674719] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101835 ] 00:05:08.653 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.653 [2024-05-16 20:03:55.732456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.911 [2024-05-16 20:03:55.839798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.911 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:09.846 ====================================== 00:05:09.846 busy:2707431484 (cyc) 00:05:09.846 total_run_count: 293000 00:05:09.846 tsc_hz: 2700000000 (cyc) 00:05:09.846 ====================================== 00:05:09.846 poller_cost: 9240 (cyc), 3422 (nsec) 00:05:09.846 00:05:09.846 real 0m1.302s 00:05:09.846 user 0m1.217s 00:05:09.846 sys 0m0.079s 00:05:09.846 20:03:56 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.846 20:03:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.846 ************************************ 00:05:09.846 END TEST thread_poller_perf 00:05:09.846 ************************************ 00:05:09.846 20:03:56 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:09.846 20:03:56 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:09.846 20:03:56 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.846 20:03:56 thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.105 ************************************ 00:05:10.105 START TEST thread_poller_perf 00:05:10.105 ************************************ 00:05:10.105 20:03:57 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:10.105 [2024-05-16 20:03:57.026357] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
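The poller_cost line in the 1 µs-period run above is straight arithmetic over the printed counters: 2707431484 busy cycles / 293000 completed polls ≈ 9240 cycles per poll, and at the reported 2.7 GHz TSC that is 9240 / 2.7 ≈ 3422 ns. The same formula applied to the zero-period run that follows gives 2702821969 / 3851000 ≈ 701 cycles ≈ 259 ns, the gap reflecting the extra timer handling that a 1 µs poller period brings. For reference, in shell:

    echo $(( 2707431484 / 293000 ))   # 9240 cycles per poll (1 us period run)
    echo $(( 2702821969 / 3851000 ))  # 701 cycles per poll (0 us period run)
    echo $(( 9240 * 10 / 27 ))        # ~3422 ns at 2.7 GHz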
00:05:10.105 [2024-05-16 20:03:57.026419] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101991 ] 00:05:10.105 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.105 [2024-05-16 20:03:57.091143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.105 [2024-05-16 20:03:57.209211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.105 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:11.480 ====================================== 00:05:11.480 busy:2702821969 (cyc) 00:05:11.480 total_run_count: 3851000 00:05:11.480 tsc_hz: 2700000000 (cyc) 00:05:11.480 ====================================== 00:05:11.480 poller_cost: 701 (cyc), 259 (nsec) 00:05:11.480 00:05:11.480 real 0m1.319s 00:05:11.480 user 0m1.233s 00:05:11.480 sys 0m0.080s 00:05:11.480 20:03:58 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.481 20:03:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:11.481 ************************************ 00:05:11.481 END TEST thread_poller_perf 00:05:11.481 ************************************ 00:05:11.481 20:03:58 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:11.481 00:05:11.481 real 0m2.769s 00:05:11.481 user 0m2.515s 00:05:11.481 sys 0m0.247s 00:05:11.481 20:03:58 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.481 20:03:58 thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.481 ************************************ 00:05:11.481 END TEST thread 00:05:11.481 ************************************ 00:05:11.481 20:03:58 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:11.481 20:03:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.481 20:03:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.481 20:03:58 -- common/autotest_common.sh@10 -- # set +x 00:05:11.481 ************************************ 00:05:11.481 START TEST accel 00:05:11.481 ************************************ 00:05:11.481 20:03:58 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:11.481 * Looking for test storage... 
00:05:11.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:11.481 20:03:58 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:11.481 20:03:58 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:11.481 20:03:58 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:11.481 20:03:58 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=102185 00:05:11.481 20:03:58 accel -- accel/accel.sh@63 -- # waitforlisten 102185 00:05:11.481 20:03:58 accel -- common/autotest_common.sh@827 -- # '[' -z 102185 ']' 00:05:11.481 20:03:58 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.481 20:03:58 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:11.481 20:03:58 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:11.481 20:03:58 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:11.481 20:03:58 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.481 20:03:58 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.481 20:03:58 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:11.481 20:03:58 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.481 20:03:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:11.481 20:03:58 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.481 20:03:58 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.481 20:03:58 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.481 20:03:58 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:11.481 20:03:58 accel -- accel/accel.sh@41 -- # jq -r . 00:05:11.481 [2024-05-16 20:03:58.500407] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:11.481 [2024-05-16 20:03:58.500498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102185 ] 00:05:11.481 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.481 [2024-05-16 20:03:58.566751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.740 [2024-05-16 20:03:58.686471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.307 20:03:59 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:12.307 20:03:59 accel -- common/autotest_common.sh@860 -- # return 0 00:05:12.307 20:03:59 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:12.307 20:03:59 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:12.307 20:03:59 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:12.307 20:03:59 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:12.307 20:03:59 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:12.307 20:03:59 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:12.307 20:03:59 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.307 20:03:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:12.307 20:03:59 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:12.307 20:03:59 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.566 20:03:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # IFS== 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:12.566 20:03:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:12.566 20:03:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # IFS== 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:12.566 20:03:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:12.566 20:03:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # IFS== 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:12.566 20:03:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:12.566 20:03:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # IFS== 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:12.566 20:03:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:12.566 20:03:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # IFS== 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:12.566 20:03:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:12.566 20:03:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # IFS== 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:12.566 20:03:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:12.566 20:03:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # IFS== 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:12.566 20:03:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:12.566 20:03:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # IFS== 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:12.566 20:03:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:12.566 20:03:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # IFS== 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:12.566 20:03:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:12.566 20:03:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # IFS== 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:12.566 20:03:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:12.566 20:03:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # IFS== 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:12.566 20:03:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:12.566 20:03:59 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # IFS== 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:12.566 20:03:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:12.566 20:03:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # IFS== 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:12.566 20:03:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:12.566 20:03:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # IFS== 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:12.566 20:03:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:12.566 20:03:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # IFS== 00:05:12.566 20:03:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:12.566 20:03:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:12.566 20:03:59 accel -- accel/accel.sh@75 -- # killprocess 102185 00:05:12.566 20:03:59 accel -- common/autotest_common.sh@946 -- # '[' -z 102185 ']' 00:05:12.566 20:03:59 accel -- common/autotest_common.sh@950 -- # kill -0 102185 00:05:12.566 20:03:59 accel -- common/autotest_common.sh@951 -- # uname 00:05:12.566 20:03:59 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:12.566 20:03:59 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 102185 00:05:12.566 20:03:59 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:12.566 20:03:59 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:12.566 20:03:59 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 102185' 00:05:12.566 killing process with pid 102185 00:05:12.566 20:03:59 accel -- common/autotest_common.sh@965 -- # kill 102185 00:05:12.566 20:03:59 accel -- common/autotest_common.sh@970 -- # wait 102185 00:05:12.826 20:03:59 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:12.826 20:03:59 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:12.826 20:03:59 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:12.826 20:03:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.826 20:03:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:12.826 20:03:59 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:05:12.826 20:03:59 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:12.826 20:03:59 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:12.826 20:03:59 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.826 20:03:59 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.826 20:03:59 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.826 20:03:59 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.826 20:03:59 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.826 20:03:59 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:12.826 20:03:59 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:12.826 20:03:59 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.826 20:03:59 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:13.090 20:03:59 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:13.090 20:03:59 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:13.090 20:03:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.090 20:03:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:13.090 ************************************ 00:05:13.090 START TEST accel_missing_filename 00:05:13.090 ************************************ 00:05:13.090 20:04:00 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:05:13.090 20:04:00 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:13.090 20:04:00 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:13.090 20:04:00 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:13.090 20:04:00 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:13.090 20:04:00 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:13.090 20:04:00 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:13.090 20:04:00 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:13.090 20:04:00 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:13.090 20:04:00 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:13.090 20:04:00 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:13.090 20:04:00 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:13.090 20:04:00 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.090 20:04:00 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.090 20:04:00 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:13.090 20:04:00 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:13.090 20:04:00 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:13.090 [2024-05-16 20:04:00.039742] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:13.090 [2024-05-16 20:04:00.039823] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102377 ] 00:05:13.090 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.090 [2024-05-16 20:04:00.104178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.090 [2024-05-16 20:04:00.223017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.352 [2024-05-16 20:04:00.283630] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:13.352 [2024-05-16 20:04:00.363944] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:13.352 A filename is required. 
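The "A filename is required." message above is the expected outcome: for the compress workload accel_perf takes its input from an uncompressed file, so leaving out -l makes startup fail before any work is submitted, which is exactly what the NOT wrapper asserts. A form that would at least get past this check, using the bib test file the next test points at (path relative to the SPDK tree, illustrative only):

    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib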
00:05:13.352 20:04:00 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:13.352 20:04:00 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:13.352 20:04:00 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:13.352 20:04:00 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:13.352 20:04:00 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:13.352 20:04:00 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:13.352 00:05:13.352 real 0m0.461s 00:05:13.352 user 0m0.350s 00:05:13.352 sys 0m0.145s 00:05:13.352 20:04:00 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.352 20:04:00 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:13.352 ************************************ 00:05:13.352 END TEST accel_missing_filename 00:05:13.352 ************************************ 00:05:13.611 20:04:00 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:13.611 20:04:00 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:13.611 20:04:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.611 20:04:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:13.611 ************************************ 00:05:13.611 START TEST accel_compress_verify 00:05:13.611 ************************************ 00:05:13.611 20:04:00 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:13.611 20:04:00 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:13.611 20:04:00 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:13.611 20:04:00 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:13.611 20:04:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:13.611 20:04:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:13.611 20:04:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:13.611 20:04:00 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:13.611 20:04:00 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:13.611 20:04:00 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:13.611 20:04:00 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:13.611 20:04:00 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:13.611 20:04:00 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.611 20:04:00 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.611 20:04:00 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:13.611 
20:04:00 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:13.611 20:04:00 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:13.611 [2024-05-16 20:04:00.550355] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:13.611 [2024-05-16 20:04:00.550416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102507 ] 00:05:13.611 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.611 [2024-05-16 20:04:00.612441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.611 [2024-05-16 20:04:00.730597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.870 [2024-05-16 20:04:00.791286] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:13.870 [2024-05-16 20:04:00.862752] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:13.870 00:05:13.870 Compression does not support the verify option, aborting. 00:05:13.870 20:04:00 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:13.870 20:04:00 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:13.870 20:04:00 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:13.870 20:04:00 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:13.870 20:04:00 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:13.870 20:04:00 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:13.870 00:05:13.870 real 0m0.449s 00:05:13.870 user 0m0.338s 00:05:13.870 sys 0m0.144s 00:05:13.870 20:04:00 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.870 20:04:00 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:13.870 ************************************ 00:05:13.870 END TEST accel_compress_verify 00:05:13.870 ************************************ 00:05:13.870 20:04:00 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:13.870 20:04:00 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:13.870 20:04:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.870 20:04:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:14.130 ************************************ 00:05:14.130 START TEST accel_wrong_workload 00:05:14.130 ************************************ 00:05:14.130 20:04:01 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:05:14.130 20:04:01 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:14.130 20:04:01 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:14.130 20:04:01 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:14.130 20:04:01 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:14.130 20:04:01 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:14.130 20:04:01 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:14.130 20:04:01 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:14.130 
20:04:01 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:14.130 20:04:01 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:14.130 20:04:01 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:14.130 20:04:01 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:14.130 20:04:01 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.130 20:04:01 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.130 20:04:01 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:14.130 20:04:01 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:14.130 20:04:01 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:14.130 Unsupported workload type: foobar 00:05:14.130 [2024-05-16 20:04:01.044511] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:14.130 accel_perf options: 00:05:14.130 [-h help message] 00:05:14.130 [-q queue depth per core] 00:05:14.130 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:14.130 [-T number of threads per core 00:05:14.130 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:14.130 [-t time in seconds] 00:05:14.130 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:14.130 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:14.130 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:14.130 [-l for compress/decompress workloads, name of uncompressed input file 00:05:14.130 [-S for crc32c workload, use this seed value (default 0) 00:05:14.130 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:14.130 [-f for fill workload, use this BYTE value (default 255) 00:05:14.130 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:14.130 [-y verify result if this switch is on] 00:05:14.130 [-a tasks to allocate per core (default: same value as -q)] 00:05:14.130 Can be used to spread operations across a wider range of memory. 
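Besides documenting the rejection of -w foobar, the option listing above serves as a quick reference for running accel_perf by hand. A representative invocation built only from flags in that list, with values close to what the accel_crc32c test further down uses (illustrative, not taken from this run):

    # 4 KiB buffers, queue depth 32, crc32c seed 32, verify results, run for 1 second
    ./build/examples/accel_perf -w crc32c -o 4096 -q 32 -S 32 -y -t 1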
00:05:14.130 20:04:01 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:14.130 20:04:01 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:14.130 20:04:01 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:14.130 20:04:01 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:14.130 00:05:14.130 real 0m0.025s 00:05:14.130 user 0m0.013s 00:05:14.130 sys 0m0.011s 00:05:14.130 20:04:01 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.130 20:04:01 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:14.130 ************************************ 00:05:14.130 END TEST accel_wrong_workload 00:05:14.130 ************************************ 00:05:14.130 Error: writing output failed: Broken pipe 00:05:14.130 20:04:01 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:14.130 20:04:01 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:14.130 20:04:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.130 20:04:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:14.130 ************************************ 00:05:14.130 START TEST accel_negative_buffers 00:05:14.130 ************************************ 00:05:14.130 20:04:01 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:14.131 20:04:01 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:14.131 20:04:01 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:14.131 20:04:01 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:14.131 20:04:01 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:14.131 20:04:01 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:14.131 20:04:01 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:14.131 20:04:01 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:14.131 20:04:01 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:14.131 20:04:01 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:14.131 20:04:01 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:14.131 20:04:01 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:14.131 20:04:01 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.131 20:04:01 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.131 20:04:01 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:14.131 20:04:01 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:14.131 20:04:01 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:14.131 -x option must be non-negative. 
00:05:14.131 [2024-05-16 20:04:01.116141] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:14.131 accel_perf options: 00:05:14.131 [-h help message] 00:05:14.131 [-q queue depth per core] 00:05:14.131 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:14.131 [-T number of threads per core 00:05:14.131 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:14.131 [-t time in seconds] 00:05:14.131 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:14.131 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:14.131 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:14.131 [-l for compress/decompress workloads, name of uncompressed input file 00:05:14.131 [-S for crc32c workload, use this seed value (default 0) 00:05:14.131 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:14.131 [-f for fill workload, use this BYTE value (default 255) 00:05:14.131 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:14.131 [-y verify result if this switch is on] 00:05:14.131 [-a tasks to allocate per core (default: same value as -q)] 00:05:14.131 Can be used to spread operations across a wider range of memory. 00:05:14.131 20:04:01 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:14.131 20:04:01 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:14.131 20:04:01 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:14.131 20:04:01 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:14.131 00:05:14.131 real 0m0.022s 00:05:14.131 user 0m0.013s 00:05:14.131 sys 0m0.008s 00:05:14.131 20:04:01 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.131 20:04:01 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:14.131 ************************************ 00:05:14.131 END TEST accel_negative_buffers 00:05:14.131 ************************************ 00:05:14.131 Error: writing output failed: Broken pipe 00:05:14.131 20:04:01 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:14.131 20:04:01 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:14.131 20:04:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.131 20:04:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:14.131 ************************************ 00:05:14.131 START TEST accel_crc32c 00:05:14.131 ************************************ 00:05:14.131 20:04:01 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:14.131 20:04:01 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:14.131 20:04:01 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:14.131 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.131 20:04:01 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:14.131 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.131 20:04:01 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:14.131 20:04:01 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:14.131 20:04:01 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:14.131 20:04:01 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:14.131 20:04:01 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.131 20:04:01 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.131 20:04:01 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:14.131 20:04:01 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:14.131 20:04:01 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:14.131 [2024-05-16 20:04:01.186140] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:14.131 [2024-05-16 20:04:01.186205] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102620 ] 00:05:14.131 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.131 [2024-05-16 20:04:01.253178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.390 [2024-05-16 20:04:01.373543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.390 20:04:01 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.390 20:04:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.766 20:04:02 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:15.766 20:04:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.766 00:05:15.766 real 0m1.462s 00:05:15.766 user 0m1.323s 00:05:15.766 sys 0m0.141s 00:05:15.766 20:04:02 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.766 20:04:02 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:15.766 ************************************ 00:05:15.766 END TEST accel_crc32c 00:05:15.766 ************************************ 00:05:15.766 20:04:02 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:15.766 20:04:02 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:15.766 20:04:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.766 20:04:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:15.766 ************************************ 00:05:15.766 START TEST accel_crc32c_C2 00:05:15.766 ************************************ 00:05:15.766 20:04:02 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:15.766 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:15.766 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:15.766 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.766 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:15.766 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.766 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:15.766 20:04:02 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.766 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:15.766 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:15.766 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.766 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.766 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:15.766 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:15.766 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:15.766 [2024-05-16 20:04:02.697106] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:15.766 [2024-05-16 20:04:02.697190] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102855 ] 00:05:15.766 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.766 [2024-05-16 20:04:02.757102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.766 [2024-05-16 20:04:02.875351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.025 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.026 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:16.026 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.026 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.026 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.026 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:16.026 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.026 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.026 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.026 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:16.026 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.026 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.026 20:04:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.403 
20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.403 00:05:17.403 real 0m1.451s 00:05:17.403 user 0m1.320s 00:05:17.403 sys 0m0.133s 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.403 20:04:04 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:17.403 ************************************ 00:05:17.403 END TEST accel_crc32c_C2 00:05:17.403 ************************************ 00:05:17.403 20:04:04 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:17.403 20:04:04 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:17.403 20:04:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.403 20:04:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.403 ************************************ 00:05:17.403 START TEST accel_copy 00:05:17.403 ************************************ 00:05:17.403 20:04:04 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.403 20:04:04 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:17.403 [2024-05-16 20:04:04.202773] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:17.403 [2024-05-16 20:04:04.202835] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103007 ] 00:05:17.403 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.403 [2024-05-16 20:04:04.266931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.403 [2024-05-16 20:04:04.384351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.403 20:04:04 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.403 20:04:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.404 20:04:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:18.780 20:04:05 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:18.780 00:05:18.780 real 0m1.458s 00:05:18.780 user 0m1.333s 00:05:18.780 sys 0m0.126s 00:05:18.780 20:04:05 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:18.780 20:04:05 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:18.780 ************************************ 00:05:18.780 END TEST accel_copy 00:05:18.780 ************************************ 00:05:18.780 20:04:05 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:18.780 20:04:05 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:18.780 20:04:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:18.780 20:04:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.780 ************************************ 00:05:18.780 START TEST accel_fill 00:05:18.780 ************************************ 00:05:18.780 20:04:05 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:18.780 20:04:05 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:18.780 20:04:05 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:18.780 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:18.780 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:18.780 20:04:05 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:18.780 20:04:05 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:18.780 20:04:05 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:18.780 20:04:05 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.780 20:04:05 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.780 20:04:05 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.780 20:04:05 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.780 20:04:05 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.780 20:04:05 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:18.780 20:04:05 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:18.780 [2024-05-16 20:04:05.711021] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:18.780 [2024-05-16 20:04:05.711081] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103258 ] 00:05:18.780 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.780 [2024-05-16 20:04:05.773069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.780 [2024-05-16 20:04:05.890222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:19.040 20:04:05 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:19.040 20:04:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:20.418 20:04:07 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.418 00:05:20.418 real 0m1.466s 00:05:20.418 user 0m1.331s 00:05:20.418 sys 0m0.136s 00:05:20.418 20:04:07 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:20.418 20:04:07 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:20.418 ************************************ 00:05:20.418 END TEST accel_fill 00:05:20.418 ************************************ 00:05:20.418 20:04:07 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:20.418 20:04:07 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:20.418 20:04:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:20.418 20:04:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.418 ************************************ 00:05:20.418 START TEST accel_copy_crc32c 00:05:20.418 ************************************ 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
00:05:20.418 [2024-05-16 20:04:07.224996] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:20.418 [2024-05-16 20:04:07.225057] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103443 ] 00:05:20.418 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.418 [2024-05-16 20:04:07.286574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.418 [2024-05-16 20:04:07.403827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.418 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.419 20:04:07 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.419 20:04:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:21.794 00:05:21.794 real 0m1.458s 00:05:21.794 user 0m1.324s 00:05:21.794 sys 0m0.136s 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.794 20:04:08 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:21.794 ************************************ 00:05:21.794 END TEST accel_copy_crc32c 00:05:21.794 ************************************ 00:05:21.794 20:04:08 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:21.794 20:04:08 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:21.794 20:04:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.794 20:04:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.794 ************************************ 00:05:21.794 START TEST accel_copy_crc32c_C2 00:05:21.794 ************************************ 00:05:21.794 20:04:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:21.794 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:21.794 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:21.794 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:21.794 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:21.794 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:05:21.794 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:21.794 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:21.794 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.795 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.795 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.795 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.795 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.795 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:21.795 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:21.795 [2024-05-16 20:04:08.731309] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:21.795 [2024-05-16 20:04:08.731372] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103601 ] 00:05:21.795 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.795 [2024-05-16 20:04:08.793346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.795 [2024-05-16 20:04:08.913217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.054 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:22.054 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.054 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.054 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.054 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:22.054 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.054 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.054 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.054 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:22.054 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.054 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.054 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.054 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:22.054 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.054 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.054 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.055 20:04:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.432 00:05:23.432 real 0m1.460s 00:05:23.432 user 0m1.314s 00:05:23.432 sys 0m0.148s 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.432 20:04:10 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:23.432 
************************************ 00:05:23.432 END TEST accel_copy_crc32c_C2 00:05:23.432 ************************************ 00:05:23.432 20:04:10 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:23.432 20:04:10 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:23.432 20:04:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.432 20:04:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.432 ************************************ 00:05:23.432 START TEST accel_dualcast 00:05:23.432 ************************************ 00:05:23.432 20:04:10 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:23.432 [2024-05-16 20:04:10.234550] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:23.432 [2024-05-16 20:04:10.234617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103874 ] 00:05:23.432 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.432 [2024-05-16 20:04:10.299027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.432 [2024-05-16 20:04:10.416807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 
20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.432 20:04:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:24.810 20:04:11 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:24.810 20:04:11 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:24.810 00:05:24.810 real 0m1.463s 00:05:24.810 user 0m1.318s 00:05:24.810 sys 0m0.146s 00:05:24.810 20:04:11 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.810 20:04:11 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:24.810 ************************************ 00:05:24.810 END TEST accel_dualcast 00:05:24.810 ************************************ 00:05:24.810 20:04:11 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:24.810 20:04:11 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:24.810 20:04:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.810 20:04:11 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.810 ************************************ 00:05:24.810 START TEST accel_compare 00:05:24.810 ************************************ 00:05:24.810 20:04:11 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:05:24.810 20:04:11 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:24.810 20:04:11 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:24.810 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:24.810 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:24.810 20:04:11 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:24.810 20:04:11 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:24.810 20:04:11 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:24.810 20:04:11 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.810 20:04:11 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.810 20:04:11 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.810 20:04:11 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.810 20:04:11 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.810 20:04:11 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:24.810 20:04:11 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:24.810 [2024-05-16 20:04:11.750405] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:24.810 [2024-05-16 20:04:11.750468] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104034 ] 00:05:24.810 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.810 [2024-05-16 20:04:11.811831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.810 [2024-05-16 20:04:11.930032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:25.070 20:04:11 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:25.070 20:04:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:26.447 20:04:13 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:26.447 20:04:13 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.447 00:05:26.447 real 0m1.456s 00:05:26.447 user 0m1.316s 00:05:26.447 sys 0m0.141s 00:05:26.447 20:04:13 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.447 20:04:13 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:26.447 ************************************ 00:05:26.447 END TEST accel_compare 00:05:26.447 ************************************ 00:05:26.447 20:04:13 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:26.447 20:04:13 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:26.447 20:04:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.447 20:04:13 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.447 ************************************ 00:05:26.447 START TEST accel_xor 00:05:26.447 ************************************ 00:05:26.447 20:04:13 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:26.447 [2024-05-16 20:04:13.253608] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:26.447 [2024-05-16 20:04:13.253663] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104189 ] 00:05:26.447 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.447 [2024-05-16 20:04:13.314299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.447 [2024-05-16 20:04:13.433018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.447 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.448 20:04:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:27.822 
20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.822 00:05:27.822 real 0m1.450s 00:05:27.822 user 0m1.321s 00:05:27.822 sys 0m0.130s 00:05:27.822 20:04:14 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.822 20:04:14 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:27.822 ************************************ 00:05:27.822 END TEST accel_xor 00:05:27.822 ************************************ 00:05:27.822 20:04:14 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:27.822 20:04:14 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:27.822 20:04:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.822 20:04:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.822 ************************************ 00:05:27.822 START TEST accel_xor 00:05:27.822 ************************************ 00:05:27.822 20:04:14 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:27.822 20:04:14 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:27.822 [2024-05-16 20:04:14.752500] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:27.822 [2024-05-16 20:04:14.752571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104460 ] 00:05:27.822 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.822 [2024-05-16 20:04:14.816457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.822 [2024-05-16 20:04:14.935378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.081 20:04:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:29.458 
20:04:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:29.458 20:04:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:29.459 20:04:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 20:04:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:29.459 20:04:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:29.459 20:04:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:29.459 20:04:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.459 00:05:29.459 real 0m1.462s 00:05:29.459 user 0m1.320s 00:05:29.459 sys 0m0.143s 00:05:29.459 20:04:16 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.459 20:04:16 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:29.459 ************************************ 00:05:29.459 END TEST accel_xor 00:05:29.459 ************************************ 00:05:29.459 20:04:16 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:29.459 20:04:16 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:29.459 20:04:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.459 20:04:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.459 ************************************ 00:05:29.459 START TEST accel_dif_verify 00:05:29.459 ************************************ 00:05:29.459 20:04:16 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:29.459 [2024-05-16 20:04:16.268915] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:29.459 [2024-05-16 20:04:16.268976] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104620 ] 00:05:29.459 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.459 [2024-05-16 20:04:16.330871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.459 [2024-05-16 20:04:16.446972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 
20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.459 20:04:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:30.831 
20:04:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:30.831 20:04:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.831 00:05:30.831 real 0m1.458s 00:05:30.831 user 0m1.327s 00:05:30.831 sys 0m0.136s 00:05:30.831 20:04:17 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.831 20:04:17 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:30.831 ************************************ 00:05:30.831 END TEST accel_dif_verify 00:05:30.831 ************************************ 00:05:30.831 20:04:17 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:30.831 20:04:17 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:30.831 20:04:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.831 20:04:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.831 ************************************ 00:05:30.831 START TEST accel_dif_generate 00:05:30.831 ************************************ 00:05:30.831 20:04:17 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:05:30.831 20:04:17 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:30.831 20:04:17 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:30.831 20:04:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:30.831 20:04:17 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
00:05:30.831 20:04:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:30.831 20:04:17 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:30.831 20:04:17 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:30.831 20:04:17 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.831 20:04:17 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.831 20:04:17 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.831 20:04:17 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.831 20:04:17 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.831 20:04:17 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:30.831 20:04:17 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:30.831 [2024-05-16 20:04:17.775600] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:30.832 [2024-05-16 20:04:17.775653] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104776 ] 00:05:30.832 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.832 [2024-05-16 20:04:17.839228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.832 [2024-05-16 20:04:17.956132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@23 
-- # accel_opc=dif_generate 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:31.091 20:04:18 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:31.091 20:04:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:32.465 20:04:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:32.465 20:04:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:32.465 20:04:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:32.465 20:04:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:32.465 20:04:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:32.465 20:04:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:32.466 20:04:19 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:32.466 00:05:32.466 real 0m1.458s 00:05:32.466 user 0m1.326s 00:05:32.466 sys 0m0.135s 00:05:32.466 
20:04:19 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.466 20:04:19 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:32.466 ************************************ 00:05:32.466 END TEST accel_dif_generate 00:05:32.466 ************************************ 00:05:32.466 20:04:19 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:32.466 20:04:19 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:32.466 20:04:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.466 20:04:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.466 ************************************ 00:05:32.466 START TEST accel_dif_generate_copy 00:05:32.466 ************************************ 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:32.466 [2024-05-16 20:04:19.284013] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:32.466 [2024-05-16 20:04:19.284074] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105057 ] 00:05:32.466 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.466 [2024-05-16 20:04:19.347975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.466 [2024-05-16 20:04:19.466281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.466 20:04:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
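The repeated 'IFS=: / read -r var val / case "$var"' lines above are accel.sh walking through the expected settings for this run (opcode dif_generate_copy, the software module, a 4096-byte payload, a 1 second run). A rough, purely illustrative bash sketch of that name:value parsing pattern (not the actual accel.sh implementation):

    # illustrative only: parse name:value pairs the way the traced loop does
    printf 'opc:dif_generate_copy\nmodule:software\n' |
    while IFS=: read -r var val; do
        case "$var" in
            opc)    echo "expected opcode: $val" ;;
            module) echo "expected module: $val" ;;
        esac
    done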
00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.841 00:05:33.841 real 0m1.462s 00:05:33.841 user 0m1.320s 00:05:33.841 sys 0m0.144s 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.841 20:04:20 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:33.841 ************************************ 00:05:33.841 END TEST accel_dif_generate_copy 00:05:33.841 ************************************ 00:05:33.841 20:04:20 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:33.841 20:04:20 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:33.841 20:04:20 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:33.841 20:04:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.841 20:04:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.841 ************************************ 00:05:33.841 START TEST accel_comp 00:05:33.841 ************************************ 00:05:33.841 20:04:20 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:33.841 20:04:20 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:33.841 20:04:20 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:05:33.841 20:04:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:33.841 20:04:20 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:33.841 20:04:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:33.841 20:04:20 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:33.841 20:04:20 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:33.841 20:04:20 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.841 20:04:20 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.841 20:04:20 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.841 20:04:20 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.841 20:04:20 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.841 20:04:20 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:33.841 20:04:20 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:33.841 [2024-05-16 20:04:20.797673] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:33.841 [2024-05-16 20:04:20.797741] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105209 ] 00:05:33.841 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.841 [2024-05-16 20:04:20.860442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.841 [2024-05-16 20:04:20.977103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.100 
20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:34.100 20:04:21 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:34.100 20:04:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:35.517 20:04:22 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.517 00:05:35.517 real 0m1.470s 00:05:35.517 user 0m1.340s 00:05:35.517 sys 0m0.132s 00:05:35.517 20:04:22 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.517 20:04:22 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:35.517 ************************************ 00:05:35.517 END TEST accel_comp 00:05:35.517 ************************************ 00:05:35.517 20:04:22 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:35.517 20:04:22 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:35.517 20:04:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.517 20:04:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.517 ************************************ 00:05:35.517 START TEST accel_decomp 00:05:35.517 ************************************ 00:05:35.517 20:04:22 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:35.517 [2024-05-16 20:04:22.320254] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:35.517 [2024-05-16 20:04:22.320320] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105375 ] 00:05:35.517 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.517 [2024-05-16 20:04:22.381803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.517 [2024-05-16 20:04:22.502211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:35.517 20:04:22 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.517 20:04:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.518 20:04:22 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.518 20:04:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:36.962 20:04:23 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.962 00:05:36.962 real 0m1.468s 00:05:36.963 user 0m1.330s 00:05:36.963 sys 0m0.140s 00:05:36.963 20:04:23 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.963 20:04:23 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:36.963 ************************************ 00:05:36.963 END TEST accel_decomp 00:05:36.963 ************************************ 00:05:36.963 
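accel_comp and accel_decomp above exercise the compress and decompress opcodes against the test/accel/bib input file. Using only the flags visible in the logged accel_perf command line (again dropping the harness-supplied -c /dev/fd/62 config), the decompress case can be reproduced roughly as:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -w/-l/-y copied verbatim from the logged command; -y presumably asks accel_perf
    # to verify the decompressed output
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y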
20:04:23 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:36.963 20:04:23 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:36.963 20:04:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.963 20:04:23 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.963 ************************************ 00:05:36.963 START TEST accel_decmop_full 00:05:36.963 ************************************ 00:05:36.963 20:04:23 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:36.963 20:04:23 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:05:36.963 20:04:23 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:05:36.963 20:04:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.963 20:04:23 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:36.963 20:04:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.963 20:04:23 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:36.963 20:04:23 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:05:36.963 20:04:23 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.963 20:04:23 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.963 20:04:23 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.963 20:04:23 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.963 20:04:23 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.963 20:04:23 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:05:36.963 20:04:23 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:05:36.963 [2024-05-16 20:04:23.839981] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:36.963 [2024-05-16 20:04:23.840043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105653 ] 00:05:36.963 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.963 [2024-05-16 20:04:23.903914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.963 [2024-05-16 20:04:24.021805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
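accel_decmop_full repeats that decompress run with -o 0 appended, and the val= trace above now shows a '111250 bytes' payload instead of the 4096-byte default, which suggests the whole bib file is processed in one operation (an inference from this trace, not a documented claim). The only change to the invocation is the extra flag:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # identical to the previous decompress run apart from -o 0, exactly as logged above
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0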
00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:05:36.963 20:04:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.964 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.964 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.964 20:04:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:36.964 20:04:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.964 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.964 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.964 20:04:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:36.964 20:04:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.964 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.964 20:04:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:38.413 20:04:25 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.413 00:05:38.413 real 0m1.483s 00:05:38.413 user 0m1.338s 00:05:38.413 sys 0m0.147s 00:05:38.413 20:04:25 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.413 20:04:25 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:05:38.413 ************************************ 00:05:38.413 END TEST accel_decmop_full 00:05:38.413 ************************************ 00:05:38.413 20:04:25 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:38.413 20:04:25 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:38.413 20:04:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.413 20:04:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.413 ************************************ 00:05:38.413 START TEST accel_decomp_mcore 00:05:38.413 ************************************ 00:05:38.413 20:04:25 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:38.413 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:38.413 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:38.413 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.413 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:38.413 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.413 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:38.413 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:38.413 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.413 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.413 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.413 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.413 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.413 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:38.413 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:38.413 [2024-05-16 20:04:25.377164] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:38.413 [2024-05-16 20:04:25.377238] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105818 ] 00:05:38.413 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.413 [2024-05-16 20:04:25.441519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:38.691 [2024-05-16 20:04:25.574298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.691 [2024-05-16 20:04:25.574347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.691 [2024-05-16 20:04:25.576871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.691 [2024-05-16 20:04:25.576938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:38.691 20:04:25 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.691 20:04:25 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.691 20:04:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.164 00:05:40.164 real 0m1.477s 00:05:40.164 user 0m4.730s 00:05:40.164 sys 0m0.148s 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.164 20:04:26 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:40.164 ************************************ 00:05:40.165 END TEST accel_decomp_mcore 00:05:40.165 ************************************ 00:05:40.165 20:04:26 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:40.165 20:04:26 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:40.165 20:04:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.165 20:04:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.165 ************************************ 00:05:40.165 START TEST accel_decomp_full_mcore 00:05:40.165 ************************************ 00:05:40.165 20:04:26 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:40.165 20:04:26 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:40.165 20:04:26 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:40.165 20:04:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.165 20:04:26 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:40.165 20:04:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.165 20:04:26 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:40.165 20:04:26 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:40.165 20:04:26 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.165 20:04:26 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.165 20:04:26 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.165 20:04:26 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.165 20:04:26 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.165 20:04:26 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:40.165 20:04:26 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:40.165 [2024-05-16 20:04:26.903043] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:40.165 [2024-05-16 20:04:26.903102] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105990 ] 00:05:40.165 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.165 [2024-05-16 20:04:26.968156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:40.165 [2024-05-16 20:04:27.090525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.165 [2024-05-16 20:04:27.090588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.165 [2024-05-16 20:04:27.090676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:40.165 [2024-05-16 20:04:27.090679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:40.165 20:04:27 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.165 20:04:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.610 00:05:41.610 real 0m1.484s 00:05:41.610 user 0m4.788s 00:05:41.610 sys 0m0.147s 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.610 20:04:28 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:41.610 ************************************ 00:05:41.610 END TEST accel_decomp_full_mcore 00:05:41.610 ************************************ 00:05:41.610 20:04:28 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:41.610 20:04:28 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:41.610 20:04:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.610 20:04:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.610 ************************************ 00:05:41.610 START TEST accel_decomp_mthread 00:05:41.610 ************************************ 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:05:41.610 [2024-05-16 20:04:28.438033] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:41.610 [2024-05-16 20:04:28.438090] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106274 ] 00:05:41.610 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.610 [2024-05-16 20:04:28.499156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.610 [2024-05-16 20:04:28.617487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:41.610 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.611 20:04:28 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.029 00:05:43.029 real 0m1.463s 00:05:43.029 user 0m1.319s 00:05:43.029 sys 0m0.147s 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.029 20:04:29 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:43.029 ************************************ 00:05:43.029 END TEST accel_decomp_mthread 00:05:43.029 ************************************ 00:05:43.029 20:04:29 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:43.029 20:04:29 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:43.029 20:04:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.029 20:04:29 
accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.029 ************************************ 00:05:43.029 START TEST accel_decomp_full_mthread 00:05:43.029 ************************************ 00:05:43.029 20:04:29 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:43.029 20:04:29 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:43.029 20:04:29 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:43.029 20:04:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.029 20:04:29 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:43.029 20:04:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.029 20:04:29 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:43.029 20:04:29 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:43.029 20:04:29 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.029 20:04:29 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.029 20:04:29 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.029 20:04:29 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.029 20:04:29 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.029 20:04:29 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:43.029 20:04:29 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:43.029 [2024-05-16 20:04:29.954715] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
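The accel_decomp_full_mthread case configured above drives the same accel_perf example binary as the preceding mcore runs, swapping the core mask for a worker-thread count. The command below is the logged invocation re-wrapped with comments; the flags are verbatim from the trace, while the notes on their meaning and the remark about -c are interpretation rather than anything the log states explicitly. In particular, -c /dev/fd/62 receives a JSON accel config generated by the test wrapper (build_accel_config), so a standalone reproduction would likely need that argument adjusted.

    # Logged command for the full_mthread case: decompress the 'bib' test input for
    # one second (-t 1), verify the output (-y), and run two worker threads (-T 2,
    # matching the "mthread" in the test name) on a single core -- the EAL line that
    # follows shows -c 0x1.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib \
        -y -o 0 -T 2
    # The earlier accel_decomp_full_mcore run differs only in its parallelism flag:
    # -m 0xf instead of -T 2, which is why that trace reports reactors on cores 0-3.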
00:05:43.029 [2024-05-16 20:04:29.954782] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106438 ] 00:05:43.029 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.029 [2024-05-16 20:04:30.019183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.029 [2024-05-16 20:04:30.142813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- 
# read -r var val 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.288 20:04:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.660 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:44.660 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.660 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.660 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.661 00:05:44.661 real 0m1.501s 00:05:44.661 user 0m1.351s 00:05:44.661 sys 0m0.152s 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.661 20:04:31 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:44.661 ************************************ 00:05:44.661 END TEST accel_decomp_full_mthread 00:05:44.661 
************************************ 00:05:44.661 20:04:31 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:44.661 20:04:31 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:44.661 20:04:31 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:44.661 20:04:31 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:44.661 20:04:31 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.661 20:04:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.661 20:04:31 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.661 20:04:31 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.661 20:04:31 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.661 20:04:31 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.661 20:04:31 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.661 20:04:31 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:44.661 20:04:31 accel -- accel/accel.sh@41 -- # jq -r . 00:05:44.661 ************************************ 00:05:44.661 START TEST accel_dif_functional_tests 00:05:44.661 ************************************ 00:05:44.661 20:04:31 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:44.661 [2024-05-16 20:04:31.531229] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:44.661 [2024-05-16 20:04:31.531287] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106712 ] 00:05:44.661 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.661 [2024-05-16 20:04:31.590570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.661 [2024-05-16 20:04:31.710306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.661 [2024-05-16 20:04:31.710359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.661 [2024-05-16 20:04:31.710377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.661 00:05:44.661 00:05:44.661 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.661 http://cunit.sourceforge.net/ 00:05:44.661 00:05:44.661 00:05:44.661 Suite: accel_dif 00:05:44.661 Test: verify: DIF generated, GUARD check ...passed 00:05:44.661 Test: verify: DIF generated, APPTAG check ...passed 00:05:44.661 Test: verify: DIF generated, REFTAG check ...passed 00:05:44.661 Test: verify: DIF not generated, GUARD check ...[2024-05-16 20:04:31.795288] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:44.661 passed 00:05:44.661 Test: verify: DIF not generated, APPTAG check ...[2024-05-16 20:04:31.795349] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:44.661 passed 00:05:44.661 Test: verify: DIF not generated, REFTAG check ...[2024-05-16 20:04:31.795381] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:44.661 passed 00:05:44.661 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:44.661 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-16 20:04:31.795449] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:44.661 passed 00:05:44.661 Test: 
verify: APPTAG incorrect, no APPTAG check ...passed 00:05:44.661 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:44.661 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:44.661 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-16 20:04:31.795582] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:44.661 passed 00:05:44.661 Test: verify copy: DIF generated, GUARD check ...passed 00:05:44.661 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:44.661 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:44.661 Test: verify copy: DIF not generated, GUARD check ...[2024-05-16 20:04:31.795725] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:44.661 passed 00:05:44.661 Test: verify copy: DIF not generated, APPTAG check ...[2024-05-16 20:04:31.795762] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:44.661 passed 00:05:44.661 Test: verify copy: DIF not generated, REFTAG check ...[2024-05-16 20:04:31.795795] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:44.661 passed 00:05:44.661 Test: generate copy: DIF generated, GUARD check ...passed 00:05:44.661 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:44.661 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:44.661 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:44.661 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:44.661 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:44.661 Test: generate copy: iovecs-len validate ...[2024-05-16 20:04:31.796067] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:44.661 passed 00:05:44.661 Test: generate copy: buffer alignment validate ...passed 00:05:44.661 00:05:44.661 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.661 suites 1 1 n/a 0 0 00:05:44.661 tests 26 26 26 0 0 00:05:44.661 asserts 115 115 115 0 n/a 00:05:44.661 00:05:44.661 Elapsed time = 0.003 seconds 00:05:44.919 00:05:44.919 real 0m0.547s 00:05:44.919 user 0m0.792s 00:05:44.919 sys 0m0.176s 00:05:44.919 20:04:32 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.919 20:04:32 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:44.919 ************************************ 00:05:44.919 END TEST accel_dif_functional_tests 00:05:44.919 ************************************ 00:05:44.919 00:05:44.919 real 0m33.656s 00:05:44.919 user 0m37.117s 00:05:44.919 sys 0m4.547s 00:05:44.919 20:04:32 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.919 20:04:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.919 ************************************ 00:05:44.919 END TEST accel 00:05:44.919 ************************************ 00:05:45.177 20:04:32 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:45.177 20:04:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.177 20:04:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.177 20:04:32 -- common/autotest_common.sh@10 -- # set +x 00:05:45.177 ************************************ 00:05:45.177 START TEST accel_rpc 00:05:45.177 ************************************ 00:05:45.177 20:04:32 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:45.177 * Looking for test storage... 00:05:45.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:45.177 20:04:32 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:45.177 20:04:32 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=106785 00:05:45.177 20:04:32 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:45.177 20:04:32 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 106785 00:05:45.177 20:04:32 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 106785 ']' 00:05:45.177 20:04:32 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.177 20:04:32 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:45.177 20:04:32 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.177 20:04:32 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:45.177 20:04:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.177 [2024-05-16 20:04:32.202629] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:45.177 [2024-05-16 20:04:32.202726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106785 ] 00:05:45.177 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.177 [2024-05-16 20:04:32.260050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.435 [2024-05-16 20:04:32.365585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.435 20:04:32 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:45.435 20:04:32 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:45.435 20:04:32 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:45.435 20:04:32 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:45.435 20:04:32 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:45.435 20:04:32 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:45.435 20:04:32 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:45.435 20:04:32 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.436 20:04:32 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.436 20:04:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.436 ************************************ 00:05:45.436 START TEST accel_assign_opcode 00:05:45.436 ************************************ 00:05:45.436 20:04:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:05:45.436 20:04:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:45.436 20:04:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.436 20:04:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:45.436 [2024-05-16 20:04:32.430259] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:45.436 20:04:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.436 20:04:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:45.436 20:04:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.436 20:04:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:45.436 [2024-05-16 20:04:32.438263] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:45.436 20:04:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.436 20:04:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:45.436 20:04:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.436 20:04:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:45.694 20:04:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.694 20:04:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:45.694 20:04:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.694 20:04:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:45.694 20:04:32 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:45.694 20:04:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:45.694 20:04:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.694 software 00:05:45.694 00:05:45.694 real 0m0.286s 00:05:45.694 user 0m0.035s 00:05:45.694 sys 0m0.007s 00:05:45.694 20:04:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.694 20:04:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:45.694 ************************************ 00:05:45.694 END TEST accel_assign_opcode 00:05:45.694 ************************************ 00:05:45.694 20:04:32 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 106785 00:05:45.694 20:04:32 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 106785 ']' 00:05:45.694 20:04:32 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 106785 00:05:45.694 20:04:32 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:05:45.694 20:04:32 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:45.694 20:04:32 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 106785 00:05:45.694 20:04:32 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:45.694 20:04:32 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:45.694 20:04:32 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 106785' 00:05:45.694 killing process with pid 106785 00:05:45.694 20:04:32 accel_rpc -- common/autotest_common.sh@965 -- # kill 106785 00:05:45.694 20:04:32 accel_rpc -- common/autotest_common.sh@970 -- # wait 106785 00:05:46.261 00:05:46.261 real 0m1.083s 00:05:46.261 user 0m1.012s 00:05:46.261 sys 0m0.406s 00:05:46.261 20:04:33 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.261 20:04:33 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.261 ************************************ 00:05:46.261 END TEST accel_rpc 00:05:46.261 ************************************ 00:05:46.261 20:04:33 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:46.261 20:04:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:46.261 20:04:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.261 20:04:33 -- common/autotest_common.sh@10 -- # set +x 00:05:46.261 ************************************ 00:05:46.261 START TEST app_cmdline 00:05:46.261 ************************************ 00:05:46.261 20:04:33 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:46.261 * Looking for test storage... 
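Stepping back to the accel_rpc suite that finished just above: it exercises opcode assignment over JSON-RPC rather than through accel_perf. It starts spdk_tgt with --wait-for-rpc, assigns the copy opcode first to a module named "incorrect" and then to the software module, initializes the framework, and checks the resulting assignment. A rough manual equivalent using the RPC names seen in the trace is sketched below; the backgrounding and the bare rpc.py calls stand in for the harness's rpc_cmd/waitforlisten helpers, so treat it as an illustration of the flow, not the test script itself.

    # Hold the target before framework init so opcode assignment is still allowed.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc &
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # (the harness waits for the RPC socket here before issuing commands)
    $RPC accel_assign_opc -o copy -m incorrect   # logged notice: copy assigned to module incorrect
    $RPC accel_assign_opc -o copy -m software    # logged notice: copy assigned to module software
    $RPC framework_start_init
    # Verify that 'copy' now reports the software module:
    $RPC accel_get_opc_assignments | jq -r .copy | grep software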
00:05:46.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:46.261 20:04:33 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:46.261 20:04:33 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=106989 00:05:46.261 20:04:33 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:46.261 20:04:33 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 106989 00:05:46.261 20:04:33 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 106989 ']' 00:05:46.261 20:04:33 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.261 20:04:33 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:46.261 20:04:33 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.261 20:04:33 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:46.261 20:04:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:46.261 [2024-05-16 20:04:33.333766] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:46.261 [2024-05-16 20:04:33.333874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106989 ] 00:05:46.261 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.261 [2024-05-16 20:04:33.396461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.520 [2024-05-16 20:04:33.513692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.780 20:04:33 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:46.780 20:04:33 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:05:46.780 20:04:33 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:47.040 { 00:05:47.040 "version": "SPDK v24.09-pre git sha1 cf8ec7cfe", 00:05:47.040 "fields": { 00:05:47.040 "major": 24, 00:05:47.040 "minor": 9, 00:05:47.040 "patch": 0, 00:05:47.040 "suffix": "-pre", 00:05:47.040 "commit": "cf8ec7cfe" 00:05:47.040 } 00:05:47.040 } 00:05:47.040 20:04:34 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:47.040 20:04:34 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:47.040 20:04:34 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:47.040 20:04:34 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:47.040 20:04:34 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:47.040 20:04:34 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.040 20:04:34 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:47.040 20:04:34 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:47.040 20:04:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:47.040 20:04:34 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.040 20:04:34 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:47.040 20:04:34 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:47.040 20:04:34 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:47.040 20:04:34 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:47.040 20:04:34 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:47.040 20:04:34 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:47.040 20:04:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.040 20:04:34 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:47.040 20:04:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.040 20:04:34 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:47.040 20:04:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.040 20:04:34 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:47.040 20:04:34 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:47.040 20:04:34 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:47.299 request: 00:05:47.299 { 00:05:47.299 "method": "env_dpdk_get_mem_stats", 00:05:47.299 "req_id": 1 00:05:47.299 } 00:05:47.299 Got JSON-RPC error response 00:05:47.299 response: 00:05:47.299 { 00:05:47.299 "code": -32601, 00:05:47.299 "message": "Method not found" 00:05:47.299 } 00:05:47.299 20:04:34 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:47.299 20:04:34 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:47.299 20:04:34 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:47.299 20:04:34 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:47.299 20:04:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 106989 00:05:47.299 20:04:34 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 106989 ']' 00:05:47.299 20:04:34 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 106989 00:05:47.299 20:04:34 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:05:47.299 20:04:34 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:47.299 20:04:34 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 106989 00:05:47.299 20:04:34 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:47.299 20:04:34 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:47.299 20:04:34 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 106989' 00:05:47.299 killing process with pid 106989 00:05:47.299 20:04:34 app_cmdline -- common/autotest_common.sh@965 -- # kill 106989 00:05:47.299 20:04:34 app_cmdline -- common/autotest_common.sh@970 -- # wait 106989 00:05:47.867 00:05:47.867 real 0m1.528s 00:05:47.867 user 0m1.869s 00:05:47.867 sys 0m0.446s 00:05:47.867 20:04:34 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.867 20:04:34 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:05:47.867 ************************************ 00:05:47.867 END TEST app_cmdline 00:05:47.867 ************************************ 00:05:47.867 20:04:34 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:47.867 20:04:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.867 20:04:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.867 20:04:34 -- common/autotest_common.sh@10 -- # set +x 00:05:47.867 ************************************ 00:05:47.867 START TEST version 00:05:47.867 ************************************ 00:05:47.867 20:04:34 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:47.867 * Looking for test storage... 00:05:47.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:47.867 20:04:34 version -- app/version.sh@17 -- # get_header_version major 00:05:47.867 20:04:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:47.867 20:04:34 version -- app/version.sh@14 -- # cut -f2 00:05:47.867 20:04:34 version -- app/version.sh@14 -- # tr -d '"' 00:05:47.867 20:04:34 version -- app/version.sh@17 -- # major=24 00:05:47.867 20:04:34 version -- app/version.sh@18 -- # get_header_version minor 00:05:47.867 20:04:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:47.867 20:04:34 version -- app/version.sh@14 -- # cut -f2 00:05:47.867 20:04:34 version -- app/version.sh@14 -- # tr -d '"' 00:05:47.867 20:04:34 version -- app/version.sh@18 -- # minor=9 00:05:47.867 20:04:34 version -- app/version.sh@19 -- # get_header_version patch 00:05:47.867 20:04:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:47.867 20:04:34 version -- app/version.sh@14 -- # cut -f2 00:05:47.867 20:04:34 version -- app/version.sh@14 -- # tr -d '"' 00:05:47.867 20:04:34 version -- app/version.sh@19 -- # patch=0 00:05:47.867 20:04:34 version -- app/version.sh@20 -- # get_header_version suffix 00:05:47.867 20:04:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:47.867 20:04:34 version -- app/version.sh@14 -- # cut -f2 00:05:47.867 20:04:34 version -- app/version.sh@14 -- # tr -d '"' 00:05:47.867 20:04:34 version -- app/version.sh@20 -- # suffix=-pre 00:05:47.867 20:04:34 version -- app/version.sh@22 -- # version=24.9 00:05:47.867 20:04:34 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:47.867 20:04:34 version -- app/version.sh@28 -- # version=24.9rc0 00:05:47.868 20:04:34 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:47.868 20:04:34 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:47.868 20:04:34 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:47.868 20:04:34 
version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:47.868 00:05:47.868 real 0m0.098s 00:05:47.868 user 0m0.055s 00:05:47.868 sys 0m0.063s 00:05:47.868 20:04:34 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.868 20:04:34 version -- common/autotest_common.sh@10 -- # set +x 00:05:47.868 ************************************ 00:05:47.868 END TEST version 00:05:47.868 ************************************ 00:05:47.868 20:04:34 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:05:47.868 20:04:34 -- spdk/autotest.sh@198 -- # uname -s 00:05:47.868 20:04:34 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:47.868 20:04:34 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:47.868 20:04:34 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:47.868 20:04:34 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:05:47.868 20:04:34 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:47.868 20:04:34 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:47.868 20:04:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.868 20:04:34 -- common/autotest_common.sh@10 -- # set +x 00:05:47.868 20:04:34 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:47.868 20:04:34 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:05:47.868 20:04:34 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:05:47.868 20:04:34 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:05:47.868 20:04:34 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:05:47.868 20:04:34 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:05:47.868 20:04:34 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:47.868 20:04:34 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:47.868 20:04:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.868 20:04:34 -- common/autotest_common.sh@10 -- # set +x 00:05:47.868 ************************************ 00:05:47.868 START TEST nvmf_tcp 00:05:47.868 ************************************ 00:05:47.868 20:04:34 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:48.127 * Looking for test storage... 00:05:48.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:48.127 20:04:35 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:48.127 20:04:35 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:48.127 20:04:35 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:48.127 20:04:35 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:48.128 20:04:35 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.128 20:04:35 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.128 20:04:35 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.128 20:04:35 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:48.128 20:04:35 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.128 20:04:35 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:05:48.128 20:04:35 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:48.128 20:04:35 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:48.128 20:04:35 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:48.128 20:04:35 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:48.128 20:04:35 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:48.128 20:04:35 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:48.128 20:04:35 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:48.128 20:04:35 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:48.128 20:04:35 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:48.128 20:04:35 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:48.128 20:04:35 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:48.128 20:04:35 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:48.128 20:04:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:48.128 20:04:35 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:48.128 20:04:35 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:48.128 20:04:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:48.128 20:04:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.128 20:04:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:48.128 ************************************ 00:05:48.128 START TEST nvmf_example 00:05:48.128 ************************************ 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:48.128 * Looking for test storage... 
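The nvmf/common.sh prologue sourced above pins the NVMe/TCP ports (4420-4422) and derives the host identity from nvme-cli instead of hard-coding it. A minimal sketch of that identity step, assuming nvme-cli is installed; the suffix extraction is illustrative rather than the harness's own code:

  HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
  HOSTID=${HOSTNQN##*uuid:}       # the UUID portion doubles as the host ID
  echo "--hostnqn=$HOSTNQN --hostid=$HOSTID"

common.sh packs the same two values into the NVME_HOST argument array seen above.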
00:05:48.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:48.128 20:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:50.031 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:50.031 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:50.031 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:50.031 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:50.031 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:50.031 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:50.031 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:50.031 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:50.031 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:50.031 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:50.031 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:50.031 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:50.031 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:50.031 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:50.031 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:05:50.032 Found 0000:09:00.0 (0x8086 - 0x159b) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:05:50.032 Found 0000:09:00.1 (0x8086 - 0x159b) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:05:50.032 Found net devices under 
0000:09:00.0: cvl_0_0 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:05:50.032 Found net devices under 0000:09:00.1: cvl_0_1 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:50.032 20:04:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:50.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:50.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:05:50.032 00:05:50.032 --- 10.0.0.2 ping statistics --- 00:05:50.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:50.032 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:50.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:50.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:05:50.032 00:05:50.032 --- 10.0.0.1 ping statistics --- 00:05:50.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:50.032 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=108982 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 108982 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 108982 ']' 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
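nvmf_tcp_init above splits the two E810 ports into an initiator side (cvl_0_1, 10.0.0.1/24, left in the default namespace) and a target side (cvl_0_0, 10.0.0.2/24, moved into cvl_0_0_ns_spdk), opens TCP port 4420 and checks reachability in both directions. Condensed into a hand-runnable sketch, assuming the same interface and namespace names:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept inbound TCP/4420 on cvl_0_1
  ping -c 1 10.0.0.2                                             # default netns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target netns -> initiator

The ~0.12 ms round-trip times above establish the link before any NVMe traffic is attempted.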
00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:50.032 20:04:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:50.032 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.966 20:04:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:50.966 20:04:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:05:50.966 20:04:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:50.966 20:04:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:50.966 20:04:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:50.966 20:04:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:50.966 20:04:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.966 20:04:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:50.966 20:04:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.966 20:04:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:50.966 20:04:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.966 20:04:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:51.225 20:04:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.225 20:04:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:51.225 20:04:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:51.225 20:04:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.225 20:04:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:51.225 20:04:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.225 20:04:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:51.225 20:04:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:51.225 20:04:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.225 20:04:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:51.225 20:04:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.225 20:04:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:51.225 20:04:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.225 20:04:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:51.225 20:04:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.225 20:04:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:51.225 20:04:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:51.225 EAL: No free 2048 kB hugepages reported on node 1 
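The target above is configured through a short JSON-RPC sequence before spdk_nvme_perf is pointed at it; rpc_cmd ultimately invokes scripts/rpc.py. A standalone equivalent, assuming an SPDK target (here the examples/nvmf app) is already listening on the default /var/tmp/spdk.sock, would look roughly like:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512                     # 64 MB malloc bdev, 512 B blocks -> Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The numbers reported below are self-consistent: at queue depth 64 with an average latency of 4218.46 us, Little's law gives 64 / 4218.46 us, roughly 15.2k IOPS, matching the reported 15171.13, and 15171 x 4 KiB works out to the reported 59.26 MiB/s.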
00:06:01.199 Initializing NVMe Controllers 00:06:01.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:01.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:01.199 Initialization complete. Launching workers. 00:06:01.199 ======================================================== 00:06:01.199 Latency(us) 00:06:01.199 Device Information : IOPS MiB/s Average min max 00:06:01.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15171.13 59.26 4218.46 643.94 20186.78 00:06:01.199 ======================================================== 00:06:01.199 Total : 15171.13 59.26 4218.46 643.94 20186.78 00:06:01.199 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:01.457 rmmod nvme_tcp 00:06:01.457 rmmod nvme_fabrics 00:06:01.457 rmmod nvme_keyring 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 108982 ']' 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 108982 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 108982 ']' 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 108982 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 108982 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 108982' 00:06:01.457 killing process with pid 108982 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 108982 00:06:01.457 20:04:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 108982 00:06:01.717 nvmf threads initialize successfully 00:06:01.717 bdev subsystem init successfully 00:06:01.717 created a nvmf target service 00:06:01.717 create targets's poll groups done 00:06:01.717 all subsystems of target started 00:06:01.717 nvmf target is running 00:06:01.717 all subsystems of target stopped 00:06:01.717 destroy targets's poll groups done 00:06:01.717 destroyed the nvmf target service 00:06:01.717 bdev subsystem finish successfully 00:06:01.717 nvmf threads destroy successfully 00:06:01.717 20:04:48 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:01.717 20:04:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:01.717 20:04:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:01.717 20:04:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:01.717 20:04:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:01.717 20:04:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:01.717 20:04:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:01.717 20:04:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:03.623 20:04:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:03.624 20:04:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:03.624 20:04:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.624 20:04:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:03.624 00:06:03.624 real 0m15.647s 00:06:03.624 user 0m44.005s 00:06:03.624 sys 0m3.533s 00:06:03.624 20:04:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.624 20:04:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:03.624 ************************************ 00:06:03.624 END TEST nvmf_example 00:06:03.624 ************************************ 00:06:03.624 20:04:50 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:03.624 20:04:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:03.624 20:04:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.624 20:04:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.624 ************************************ 00:06:03.624 START TEST nvmf_filesystem 00:06:03.624 ************************************ 00:06:03.624 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:03.886 * Looking for test storage... 
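Each suite in this log is launched through run_test, which prints the START/END banners and the real/user/sys timing seen above around the wrapped script. A conceptual sketch only, not the actual helper in autotest_common.sh (which also manages xtrace and banner formatting):

  run_test() {                      # illustrative reimplementation
      local name=$1; shift
      echo "START TEST $name"
      time "$@"                     # run the wrapped test script with its arguments
      echo "END TEST $name"
  }
  run_test nvmf_filesystem test/nvmf/target/filesystem.sh --transport=tcp

The filesystem suite that starts here opens the same way, by sourcing test/common/autotest_common.sh and the build configuration that follows.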
00:06:03.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:03.886 20:04:50 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:03.886 20:04:50 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:03.886 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:03.887 
20:04:50 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:03.887 #define SPDK_CONFIG_H 00:06:03.887 #define SPDK_CONFIG_APPS 1 00:06:03.887 #define SPDK_CONFIG_ARCH native 00:06:03.887 #undef SPDK_CONFIG_ASAN 00:06:03.887 #undef SPDK_CONFIG_AVAHI 00:06:03.887 #undef SPDK_CONFIG_CET 00:06:03.887 #define SPDK_CONFIG_COVERAGE 1 00:06:03.887 #define SPDK_CONFIG_CROSS_PREFIX 00:06:03.887 #undef SPDK_CONFIG_CRYPTO 00:06:03.887 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:03.887 #undef SPDK_CONFIG_CUSTOMOCF 00:06:03.887 #undef SPDK_CONFIG_DAOS 00:06:03.887 #define SPDK_CONFIG_DAOS_DIR 00:06:03.887 #define SPDK_CONFIG_DEBUG 1 00:06:03.887 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:03.887 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:03.887 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:03.887 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:03.887 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:03.887 #undef SPDK_CONFIG_DPDK_UADK 00:06:03.887 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:03.887 #define SPDK_CONFIG_EXAMPLES 1 00:06:03.887 #undef SPDK_CONFIG_FC 00:06:03.887 #define SPDK_CONFIG_FC_PATH 00:06:03.887 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:03.887 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:03.887 #undef SPDK_CONFIG_FUSE 00:06:03.887 #undef SPDK_CONFIG_FUZZER 00:06:03.887 #define SPDK_CONFIG_FUZZER_LIB 00:06:03.887 #undef SPDK_CONFIG_GOLANG 00:06:03.887 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:03.887 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:03.887 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:03.887 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:03.887 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:03.887 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:03.887 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:03.887 #define SPDK_CONFIG_IDXD 1 00:06:03.887 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:03.887 #undef SPDK_CONFIG_IPSEC_MB 00:06:03.887 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:03.887 #define SPDK_CONFIG_ISAL 1 00:06:03.887 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:03.887 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:03.887 #define SPDK_CONFIG_LIBDIR 00:06:03.887 #undef SPDK_CONFIG_LTO 00:06:03.887 #define SPDK_CONFIG_MAX_LCORES 00:06:03.887 #define SPDK_CONFIG_NVME_CUSE 1 00:06:03.887 #undef SPDK_CONFIG_OCF 00:06:03.887 #define SPDK_CONFIG_OCF_PATH 00:06:03.887 #define SPDK_CONFIG_OPENSSL_PATH 00:06:03.887 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:03.887 #define SPDK_CONFIG_PGO_DIR 00:06:03.887 #undef 
SPDK_CONFIG_PGO_USE 00:06:03.887 #define SPDK_CONFIG_PREFIX /usr/local 00:06:03.887 #undef SPDK_CONFIG_RAID5F 00:06:03.887 #undef SPDK_CONFIG_RBD 00:06:03.887 #define SPDK_CONFIG_RDMA 1 00:06:03.887 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:03.887 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:03.887 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:03.887 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:03.887 #define SPDK_CONFIG_SHARED 1 00:06:03.887 #undef SPDK_CONFIG_SMA 00:06:03.887 #define SPDK_CONFIG_TESTS 1 00:06:03.887 #undef SPDK_CONFIG_TSAN 00:06:03.887 #define SPDK_CONFIG_UBLK 1 00:06:03.887 #define SPDK_CONFIG_UBSAN 1 00:06:03.887 #undef SPDK_CONFIG_UNIT_TESTS 00:06:03.887 #undef SPDK_CONFIG_URING 00:06:03.887 #define SPDK_CONFIG_URING_PATH 00:06:03.887 #undef SPDK_CONFIG_URING_ZNS 00:06:03.887 #undef SPDK_CONFIG_USDT 00:06:03.887 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:03.887 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:03.887 #define SPDK_CONFIG_VFIO_USER 1 00:06:03.887 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:03.887 #define SPDK_CONFIG_VHOST 1 00:06:03.887 #define SPDK_CONFIG_VIRTIO 1 00:06:03.887 #undef SPDK_CONFIG_VTUNE 00:06:03.887 #define SPDK_CONFIG_VTUNE_DIR 00:06:03.887 #define SPDK_CONFIG_WERROR 1 00:06:03.887 #define SPDK_CONFIG_WPDK_DIR 00:06:03.887 #undef SPDK_CONFIG_XNVME 00:06:03.887 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:03.887 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:06:03.888 20:04:50 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:03.888 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo 
leak:libfuse3.so 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 
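The exports traced above are where autotest_common.sh assembles the sanitizer environment for the run: ASAN_OPTIONS and UBSAN_OPTIONS are set verbatim, and a leak-suppression file is rebuilt and wired into LSAN_OPTIONS before any test binary starts. A minimal shell sketch of that sequence, with the paths, option strings, and the libfuse3 suppression taken from the trace (the exact wording inside the script may differ):

  # Sanitizer runtime options, copied from the exports in the trace above
  export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
  export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'

  # Rebuild the LSAN suppression file and point LSAN at it
  asan_suppression_file=/var/tmp/asan_suppression_file
  rm -rf "$asan_suppression_file"
  echo 'leak:libfuse3.so' > "$asan_suppression_file"
  export LSAN_OPTIONS="suppressions=$asan_suppression_file"
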
00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 110711 ]] 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 110711 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.N9WNsn 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.N9WNsn/tests/target /tmp/spdk.N9WNsn 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # 
avails["$mount"]=67108864 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=973852672 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4310577152 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=56912965632 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994700800 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5081735168 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30993973248 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997348352 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:06:03.889 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12390182912 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398940160 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8757248 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30997045248 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997352448 00:06:03.890 20:04:50 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=307200 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199463936 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199468032 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:06:03.890 * Looking for test storage... 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=56912965632 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=7296327680 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:03.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:06:03.890 20:04:50 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.890 
20:04:50 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.890 20:04:50 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:03.891 20:04:50 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:03.891 20:04:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:05.796 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:05.796 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:05.796 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:05.796 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:05.796 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:05.796 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:05.796 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:05.796 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:05.796 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:05.796 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:05.796 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:05.796 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:05.796 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:05.796 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:05.796 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:05.796 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:05.796 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:05.796 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:05.796 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:05.796 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:05.797 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:05.797 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:05.797 20:04:52 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:05.797 Found net devices under 0000:09:00.0: cvl_0_0 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:05.797 Found net devices under 0000:09:00.1: cvl_0_1 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:05.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:05.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:06:05.797 00:06:05.797 --- 10.0.0.2 ping statistics --- 00:06:05.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:05.797 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:05.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:05.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:06:05.797 00:06:05.797 --- 10.0.0.1 ping statistics --- 00:06:05.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:05.797 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.797 20:04:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:06.056 ************************************ 00:06:06.056 START TEST nvmf_filesystem_no_in_capsule 00:06:06.056 ************************************ 00:06:06.056 20:04:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:06:06.056 20:04:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:06.056 20:04:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:06.056 20:04:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:06.056 20:04:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:06:06.056 20:04:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:06.056 20:04:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=112334 00:06:06.056 20:04:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:06.056 20:04:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 112334 00:06:06.056 20:04:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 112334 ']' 00:06:06.056 20:04:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.056 20:04:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:06.056 20:04:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.056 20:04:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:06.056 20:04:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:06.056 [2024-05-16 20:04:52.994616] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:06.056 [2024-05-16 20:04:52.994706] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:06.056 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.056 [2024-05-16 20:04:53.057698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.056 [2024-05-16 20:04:53.168222] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:06.056 [2024-05-16 20:04:53.168272] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:06.056 [2024-05-16 20:04:53.168303] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:06.056 [2024-05-16 20:04:53.168315] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:06.056 [2024-05-16 20:04:53.168324] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
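By this point nvmf/common.sh has turned the two detected E810 ports into a back-to-back test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator interface (10.0.0.1), port 4420 is opened for NVMe/TCP, connectivity is ping-checked in both directions, and nvmf_tgt is started inside the namespace. A condensed sketch of the equivalent commands, with interface names, addresses, and flags taken from the trace above (backgrounding and the readiness wait are paraphrased; the harness itself uses waitforlisten on /var/tmp/spdk.sock):

  # Move the target-side port into its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Address both ends: initiator in the root namespace, target inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Let NVMe/TCP traffic in, sanity-check connectivity, load the initiator driver
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp

  # Launch the SPDK NVMe-oF target inside the namespace, then wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
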
00:06:06.056 [2024-05-16 20:04:53.168401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.056 [2024-05-16 20:04:53.168685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.056 [2024-05-16 20:04:53.168742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.056 [2024-05-16 20:04:53.168745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.315 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:06.315 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:06:06.315 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:06.315 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:06.315 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:06.315 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:06.315 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:06.315 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:06.315 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.315 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:06.315 [2024-05-16 20:04:53.309357] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:06.315 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.315 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:06.315 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.315 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:06.315 Malloc1 00:06:06.315 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.315 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:06.315 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.315 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:06.574 [2024-05-16 20:04:53.478876] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:06.574 [2024-05-16 20:04:53.479168] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:06:06.574 { 00:06:06.574 "name": "Malloc1", 00:06:06.574 "aliases": [ 00:06:06.574 "5764dbfc-ca2d-4464-8786-e3d6c3138ae2" 00:06:06.574 ], 00:06:06.574 "product_name": "Malloc disk", 00:06:06.574 "block_size": 512, 00:06:06.574 "num_blocks": 1048576, 00:06:06.574 "uuid": "5764dbfc-ca2d-4464-8786-e3d6c3138ae2", 00:06:06.574 "assigned_rate_limits": { 00:06:06.574 "rw_ios_per_sec": 0, 00:06:06.574 "rw_mbytes_per_sec": 0, 00:06:06.574 "r_mbytes_per_sec": 0, 00:06:06.574 "w_mbytes_per_sec": 0 00:06:06.574 }, 00:06:06.574 "claimed": true, 00:06:06.574 "claim_type": "exclusive_write", 00:06:06.574 "zoned": false, 00:06:06.574 "supported_io_types": { 00:06:06.574 "read": true, 00:06:06.574 "write": true, 00:06:06.574 "unmap": true, 00:06:06.574 "write_zeroes": true, 00:06:06.574 "flush": true, 00:06:06.574 "reset": true, 00:06:06.574 "compare": false, 00:06:06.574 "compare_and_write": false, 00:06:06.574 "abort": true, 00:06:06.574 "nvme_admin": false, 00:06:06.574 "nvme_io": false 00:06:06.574 }, 00:06:06.574 "memory_domains": [ 00:06:06.574 { 00:06:06.574 "dma_device_id": "system", 00:06:06.574 "dma_device_type": 1 
00:06:06.574 }, 00:06:06.574 { 00:06:06.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.574 "dma_device_type": 2 00:06:06.574 } 00:06:06.574 ], 00:06:06.574 "driver_specific": {} 00:06:06.574 } 00:06:06.574 ]' 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:06:06.574 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:06:06.575 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:06.575 20:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:07.139 20:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:07.139 20:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:06:07.139 20:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:06:07.139 20:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:06:07.139 20:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:06:09.663 20:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:06:09.663 20:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:06:09.663 20:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:06:09.663 20:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:06:09.663 20:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:06:09.663 20:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:06:09.663 20:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:09.663 20:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:09.663 20:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:09.663 20:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:09.663 20:04:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:09.663 20:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:09.663 20:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:09.663 20:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:09.663 20:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:09.663 20:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:09.663 20:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:09.663 20:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:09.663 20:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:10.596 20:04:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:10.596 20:04:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:10.596 20:04:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:10.596 20:04:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.596 20:04:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:10.854 ************************************ 00:06:10.854 START TEST filesystem_ext4 00:06:10.854 ************************************ 00:06:10.854 20:04:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:10.854 20:04:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:10.854 20:04:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:10.854 20:04:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:10.854 20:04:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:06:10.854 20:04:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:10.854 20:04:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:06:10.854 20:04:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:06:10.854 20:04:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:06:10.854 20:04:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:06:10.854 20:04:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:10.854 mke2fs 1.46.5 (30-Dec-2021) 00:06:10.854 Discarding device blocks: 0/522240 done 00:06:10.854 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:10.854 Filesystem UUID: 01460972-a284-462d-a353-4ba3f4044161 00:06:10.854 Superblock backups stored on blocks: 00:06:10.854 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:10.854 00:06:10.854 Allocating group tables: 0/64 done 00:06:10.854 Writing inode tables: 0/64 done 00:06:10.854 Creating journal (8192 blocks): done 00:06:10.854 Writing superblocks and filesystem accounting information: 0/64 done 00:06:10.854 00:06:10.854 20:04:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:06:10.854 20:04:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 112334 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:11.787 00:06:11.787 real 0m0.990s 00:06:11.787 user 0m0.013s 00:06:11.787 sys 0m0.031s 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:11.787 ************************************ 00:06:11.787 END TEST filesystem_ext4 00:06:11.787 ************************************ 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.787 20:04:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:11.787 ************************************ 00:06:11.787 START TEST filesystem_btrfs 00:06:11.787 ************************************ 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:06:11.787 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:12.044 btrfs-progs v6.6.2 00:06:12.044 See https://btrfs.readthedocs.io for more information. 00:06:12.044 00:06:12.044 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:12.044 NOTE: several default settings have changed in version 5.15, please make sure 00:06:12.044 this does not affect your deployments: 00:06:12.044 - DUP for metadata (-m dup) 00:06:12.044 - enabled no-holes (-O no-holes) 00:06:12.044 - enabled free-space-tree (-R free-space-tree) 00:06:12.044 00:06:12.044 Label: (null) 00:06:12.045 UUID: 5ab6797c-eab5-4dc2-ace2-028a52d413ba 00:06:12.045 Node size: 16384 00:06:12.045 Sector size: 4096 00:06:12.045 Filesystem size: 510.00MiB 00:06:12.045 Block group profiles: 00:06:12.045 Data: single 8.00MiB 00:06:12.045 Metadata: DUP 32.00MiB 00:06:12.045 System: DUP 8.00MiB 00:06:12.045 SSD detected: yes 00:06:12.045 Zoned device: no 00:06:12.045 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:12.045 Runtime features: free-space-tree 00:06:12.045 Checksum: crc32c 00:06:12.045 Number of devices: 1 00:06:12.045 Devices: 00:06:12.045 ID SIZE PATH 00:06:12.045 1 510.00MiB /dev/nvme0n1p1 00:06:12.045 00:06:12.045 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:06:12.045 20:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 112334 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:12.303 00:06:12.303 real 0m0.490s 00:06:12.303 user 0m0.013s 00:06:12.303 sys 0m0.068s 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:12.303 ************************************ 00:06:12.303 END TEST filesystem_btrfs 00:06:12.303 ************************************ 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:12.303 20:04:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:12.303 ************************************ 00:06:12.303 START TEST filesystem_xfs 00:06:12.303 ************************************ 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:06:12.303 20:04:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:12.303 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:12.303 = sectsz=512 attr=2, projid32bit=1 00:06:12.303 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:12.303 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:12.303 data = bsize=4096 blocks=130560, imaxpct=25 00:06:12.303 = sunit=0 swidth=0 blks 00:06:12.303 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:12.303 log =internal log bsize=4096 blocks=16384, version=2 00:06:12.303 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:12.303 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:13.676 Discarding blocks...Done. 
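[editor's note] The ext4, btrfs and xfs subtests around this point all drive the same create/mount/write/unmount loop against the exported namespace; only the mkfs invocation differs, and each run ends by confirming the target process and the partition are still visible. A condensed sketch of that repeated pattern, assuming the device and mount point shown in the trace (the helper name here is illustrative, not the script's):

    # Illustrative summary of the per-filesystem check repeated above for ext4, btrfs and xfs.
    verify_filesystem() {
        local fstype=$1 dev=/dev/nvme0n1p1 mnt=/mnt/device

        case "$fstype" in
            ext4)  mkfs.ext4 -F "$dev" ;;
            btrfs) mkfs.btrfs -f "$dev" ;;
            xfs)   mkfs.xfs -f "$dev" ;;
        esac

        mount "$dev" "$mnt"
        touch "$mnt/aaa" && sync        # prove the NVMe/TCP-backed device accepts writes
        rm "$mnt/aaa" && sync
        umount "$mnt"

        kill -0 "$nvmfpid"                        # target process must still be alive
        lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition must still be visible
    }

Calling it as, say, verify_filesystem xfs reproduces the sequence that follows this mkfs.xfs output.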
00:06:13.676 20:05:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:06:13.676 20:05:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:16.199 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:16.199 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:16.199 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:16.199 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:16.199 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:16.199 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:16.199 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 112334 00:06:16.199 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:16.199 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:16.199 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:16.199 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:16.199 00:06:16.199 real 0m3.940s 00:06:16.199 user 0m0.016s 00:06:16.199 sys 0m0.055s 00:06:16.199 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.199 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:16.199 ************************************ 00:06:16.199 END TEST filesystem_xfs 00:06:16.199 ************************************ 00:06:16.199 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:16.199 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:16.199 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:16.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:06:16.458 
20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 112334 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 112334 ']' 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 112334 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112334 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112334' 00:06:16.458 killing process with pid 112334 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 112334 00:06:16.458 [2024-05-16 20:05:03.434564] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:16.458 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 112334 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:17.024 00:06:17.024 real 0m10.943s 00:06:17.024 user 0m41.788s 00:06:17.024 sys 0m1.547s 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.024 ************************************ 00:06:17.024 END TEST nvmf_filesystem_no_in_capsule 00:06:17.024 ************************************ 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 
-le 1 ']' 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.024 ************************************ 00:06:17.024 START TEST nvmf_filesystem_in_capsule 00:06:17.024 ************************************ 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=113784 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 113784 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 113784 ']' 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:17.024 20:05:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.024 [2024-05-16 20:05:03.994849] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:17.024 [2024-05-16 20:05:03.994956] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.024 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.024 [2024-05-16 20:05:04.060103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.024 [2024-05-16 20:05:04.168291] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:17.024 [2024-05-16 20:05:04.168347] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
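[editor's note] The second test pass repeats the configuration of the first; the only functional difference is the in-capsule data size handed to the transport, which the next RPC in the trace sets to 4096 bytes instead of 0. A condensed sketch of that configuration sequence, using SPDK's rpc.py as a stand-in for the rpc_cmd wrapper the script actually calls:

    # in_capsule pass: identical to the first pass except for -c 4096.
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096    # was -c 0 in the no_in_capsule pass
    rpc.py bdev_malloc_create 512 512 -b Malloc1              # 512 MiB malloc bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420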
00:06:17.024 [2024-05-16 20:05:04.168361] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:17.024 [2024-05-16 20:05:04.168373] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:17.024 [2024-05-16 20:05:04.168383] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:17.024 [2024-05-16 20:05:04.168503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.024 [2024-05-16 20:05:04.168564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.024 [2024-05-16 20:05:04.168629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.024 [2024-05-16 20:05:04.168631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.283 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:17.283 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:06:17.283 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:17.283 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.283 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.283 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:17.283 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:17.283 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:17.283 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.283 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.283 [2024-05-16 20:05:04.309604] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.283 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.283 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:17.283 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.283 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.541 Malloc1 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.541 20:05:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.541 [2024-05-16 20:05:04.485758] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:17.541 [2024-05-16 20:05:04.486063] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:06:17.541 { 00:06:17.541 "name": "Malloc1", 00:06:17.541 "aliases": [ 00:06:17.541 "a822f104-4de3-4631-83a0-925cd194b918" 00:06:17.541 ], 00:06:17.541 "product_name": "Malloc disk", 00:06:17.541 "block_size": 512, 00:06:17.541 "num_blocks": 1048576, 00:06:17.541 "uuid": "a822f104-4de3-4631-83a0-925cd194b918", 00:06:17.541 "assigned_rate_limits": { 00:06:17.541 "rw_ios_per_sec": 0, 00:06:17.541 "rw_mbytes_per_sec": 0, 00:06:17.541 "r_mbytes_per_sec": 0, 00:06:17.541 "w_mbytes_per_sec": 0 00:06:17.541 }, 00:06:17.541 "claimed": true, 00:06:17.541 "claim_type": "exclusive_write", 00:06:17.541 "zoned": false, 00:06:17.541 "supported_io_types": { 00:06:17.541 "read": true, 00:06:17.541 "write": true, 00:06:17.541 "unmap": true, 00:06:17.541 "write_zeroes": true, 00:06:17.541 "flush": true, 00:06:17.541 "reset": true, 
00:06:17.541 "compare": false, 00:06:17.541 "compare_and_write": false, 00:06:17.541 "abort": true, 00:06:17.541 "nvme_admin": false, 00:06:17.541 "nvme_io": false 00:06:17.541 }, 00:06:17.541 "memory_domains": [ 00:06:17.541 { 00:06:17.541 "dma_device_id": "system", 00:06:17.541 "dma_device_type": 1 00:06:17.541 }, 00:06:17.541 { 00:06:17.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:17.541 "dma_device_type": 2 00:06:17.541 } 00:06:17.541 ], 00:06:17.541 "driver_specific": {} 00:06:17.541 } 00:06:17.541 ]' 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:17.541 20:05:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:18.106 20:05:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:18.106 20:05:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:06:18.106 20:05:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:06:18.107 20:05:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:06:18.107 20:05:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:06:20.006 20:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:06:20.006 20:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:06:20.006 20:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:06:20.006 20:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:06:20.006 20:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:06:20.006 20:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:06:20.006 20:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:20.006 20:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:20.006 20:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:20.006 20:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:20.006 20:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:20.006 20:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:20.006 20:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:20.006 20:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:20.006 20:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:20.265 20:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:20.265 20:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:20.265 20:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:20.523 20:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:21.457 20:05:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:21.457 20:05:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:21.457 20:05:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:21.457 20:05:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.457 20:05:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:21.715 ************************************ 00:06:21.715 START TEST filesystem_in_capsule_ext4 00:06:21.715 ************************************ 00:06:21.715 20:05:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:21.715 20:05:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:21.715 20:05:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:21.715 20:05:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:21.715 20:05:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:06:21.715 20:05:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:21.715 20:05:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:06:21.715 20:05:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:06:21.715 20:05:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:06:21.715 20:05:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:06:21.715 20:05:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:21.715 mke2fs 1.46.5 (30-Dec-2021) 00:06:21.715 Discarding device blocks: 0/522240 done 00:06:21.715 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:21.715 Filesystem UUID: b064a8e2-55fd-42c9-a22b-91b8ec18c598 00:06:21.715 Superblock backups stored on blocks: 00:06:21.715 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:21.715 00:06:21.715 Allocating group tables: 0/64 done 00:06:21.715 Writing inode tables: 0/64 done 00:06:21.973 Creating journal (8192 blocks): done 00:06:22.796 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:06:22.796 00:06:22.796 20:05:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:06:22.796 20:05:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 113784 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:23.729 00:06:23.729 real 0m2.072s 00:06:23.729 user 0m0.014s 00:06:23.729 sys 0m0.028s 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:23.729 ************************************ 00:06:23.729 END TEST filesystem_in_capsule_ext4 00:06:23.729 ************************************ 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:23.729 ************************************ 00:06:23.729 START TEST filesystem_in_capsule_btrfs 00:06:23.729 ************************************ 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:06:23.729 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:23.987 btrfs-progs v6.6.2 00:06:23.987 See https://btrfs.readthedocs.io for more information. 00:06:23.987 00:06:23.987 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:23.987 NOTE: several default settings have changed in version 5.15, please make sure 00:06:23.987 this does not affect your deployments: 00:06:23.987 - DUP for metadata (-m dup) 00:06:23.987 - enabled no-holes (-O no-holes) 00:06:23.987 - enabled free-space-tree (-R free-space-tree) 00:06:23.987 00:06:23.987 Label: (null) 00:06:23.987 UUID: 273b1e53-c34c-45d9-9746-07e3323b37dd 00:06:23.987 Node size: 16384 00:06:23.987 Sector size: 4096 00:06:23.987 Filesystem size: 510.00MiB 00:06:23.987 Block group profiles: 00:06:23.987 Data: single 8.00MiB 00:06:23.987 Metadata: DUP 32.00MiB 00:06:23.987 System: DUP 8.00MiB 00:06:23.987 SSD detected: yes 00:06:23.987 Zoned device: no 00:06:23.987 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:23.987 Runtime features: free-space-tree 00:06:23.987 Checksum: crc32c 00:06:23.987 Number of devices: 1 00:06:23.987 Devices: 00:06:23.987 ID SIZE PATH 00:06:23.987 1 510.00MiB /dev/nvme0n1p1 00:06:23.987 00:06:23.987 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:06:23.987 20:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 113784 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:24.246 00:06:24.246 real 0m0.541s 00:06:24.246 user 0m0.013s 00:06:24.246 sys 0m0.044s 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:24.246 ************************************ 00:06:24.246 END TEST filesystem_in_capsule_btrfs 00:06:24.246 ************************************ 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.246 ************************************ 00:06:24.246 START TEST filesystem_in_capsule_xfs 00:06:24.246 ************************************ 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:06:24.246 20:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:24.505 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:24.505 = sectsz=512 attr=2, projid32bit=1 00:06:24.505 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:24.505 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:24.505 data = bsize=4096 blocks=130560, imaxpct=25 00:06:24.505 = sunit=0 swidth=0 blks 00:06:24.505 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:24.505 log =internal log bsize=4096 blocks=16384, version=2 00:06:24.505 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:24.505 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:25.070 Discarding blocks...Done. 
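The mount/touch/sync/umount sequence that follows is the same smoke test the btrfs case above ran; a minimal standalone sketch of it is given here (the device, mountpoint and -f force flag are taken from this trace, and the make_filesystem / nvmf_filesystem_create helper names belong to target/filesystem.sh and the common autotest scripts, not to this sketch):

#!/usr/bin/env bash
# Rough equivalent of the steps traced at target/filesystem.sh@21-30 for one fstype.
# Assumes /dev/nvme0n1p1 is a partition on the connected NVMe-oF namespace and
# that the /mnt/device mountpoint already exists.
set -e
fstype=xfs              # btrfs in the previous case; both accept -f to overwrite an old filesystem
dev=/dev/nvme0n1p1
mnt=/mnt/device
mkfs.$fstype -f "$dev"
mount "$dev" "$mnt"
touch "$mnt/aaa"        # simple create/write
sync
rm "$mnt/aaa"           # simple delete
sync
umount "$mnt"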
00:06:25.070 20:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:06:25.070 20:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:27.597 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:27.597 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:27.597 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:27.597 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:27.597 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:27.597 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:27.597 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 113784 00:06:27.597 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:27.597 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:27.597 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:27.597 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:27.597 00:06:27.597 real 0m3.405s 00:06:27.597 user 0m0.017s 00:06:27.597 sys 0m0.034s 00:06:27.597 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.597 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:27.597 ************************************ 00:06:27.597 END TEST filesystem_in_capsule_xfs 00:06:27.597 ************************************ 00:06:27.597 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:27.855 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:27.855 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:27.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:27.855 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:27.855 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:06:27.855 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:06:27.855 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:27.855 20:05:14 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:06:27.855 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:27.855 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:06:27.855 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:27.855 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.855 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.855 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.856 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:27.856 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 113784 00:06:27.856 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 113784 ']' 00:06:27.856 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 113784 00:06:27.856 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:06:27.856 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:27.856 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 113784 00:06:27.856 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:27.856 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:27.856 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 113784' 00:06:27.856 killing process with pid 113784 00:06:27.856 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 113784 00:06:27.856 [2024-05-16 20:05:14.896716] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:27.856 20:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 113784 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:28.424 00:06:28.424 real 0m11.431s 00:06:28.424 user 0m43.652s 00:06:28.424 sys 0m1.574s 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.424 ************************************ 00:06:28.424 END TEST nvmf_filesystem_in_capsule 00:06:28.424 ************************************ 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:28.424 rmmod nvme_tcp 00:06:28.424 rmmod nvme_fabrics 00:06:28.424 rmmod nvme_keyring 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:28.424 20:05:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.959 20:05:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:30.959 00:06:30.959 real 0m26.736s 00:06:30.959 user 1m26.310s 00:06:30.959 sys 0m4.607s 00:06:30.959 20:05:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.959 20:05:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.959 ************************************ 00:06:30.959 END TEST nvmf_filesystem 00:06:30.959 ************************************ 00:06:30.959 20:05:17 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:30.959 20:05:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:30.959 20:05:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.959 20:05:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.959 ************************************ 00:06:30.959 START TEST nvmf_target_discovery 00:06:30.959 ************************************ 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:30.959 * Looking for test storage... 
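The discovery test starting here builds its targets entirely over JSON-RPC; a condensed sketch of that call sequence, issued through scripts/rpc.py rather than the suite's rpc_cmd wrapper, is given below. The addresses, ports, NQNs, serial numbers and bdev sizes are the ones visible later in this trace; the rpc.py path is inferred from the workspace layout, and the real run additionally passes --hostnqn/--hostid to nvme discover.

#!/usr/bin/env bash
# Sketch only: mirrors the rpc_cmd calls traced below, not discovery.sh itself.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
  $rpc bdev_null_create Null$i 102400 512      # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from this trace
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
# The trace reports 6 discovery records: the current discovery subsystem,
# the four cnode subsystems, and the 4430 referral.
nvme discover -t tcp -a 10.0.0.2 -s 4420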
00:06:30.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.959 20:05:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:30.960 20:05:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.960 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:30.960 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:30.960 20:05:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:30.960 20:05:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:32.863 20:05:19 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:32.863 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:32.863 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:32.863 Found net devices under 0000:09:00.0: cvl_0_0 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:32.863 Found net devices under 0000:09:00.1: cvl_0_1 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:32.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:32.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:06:32.863 00:06:32.863 --- 10.0.0.2 ping statistics --- 00:06:32.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.863 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:32.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:32.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:06:32.863 00:06:32.863 --- 10.0.0.1 ping statistics --- 00:06:32.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.863 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:32.863 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:32.864 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:32.864 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:32.864 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:32.864 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:32.864 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:32.864 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:32.864 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:32.864 20:05:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:32.864 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:32.864 20:05:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:32.864 20:05:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.864 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=117258 00:06:32.864 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:32.864 20:05:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 117258 00:06:32.864 20:05:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 117258 ']' 00:06:32.864 20:05:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.864 20:05:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:32.864 20:05:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:32.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.864 20:05:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:32.864 20:05:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.864 [2024-05-16 20:05:19.718066] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:32.864 [2024-05-16 20:05:19.718151] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.864 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.864 [2024-05-16 20:05:19.790221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:32.864 [2024-05-16 20:05:19.910647] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:32.864 [2024-05-16 20:05:19.910710] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:32.864 [2024-05-16 20:05:19.910736] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:32.864 [2024-05-16 20:05:19.910757] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:32.864 [2024-05-16 20:05:19.910769] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:32.864 [2024-05-16 20:05:19.910872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.864 [2024-05-16 20:05:19.910919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.864 [2024-05-16 20:05:19.910949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.864 [2024-05-16 20:05:19.910952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.798 [2024-05-16 20:05:20.709811] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:33.798 20:05:20 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.798 Null1 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.798 [2024-05-16 20:05:20.753882] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:33.798 [2024-05-16 20:05:20.754177] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.798 Null2 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.798 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.799 Null3 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.799 Null4 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.799 20:05:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:06:34.057 00:06:34.057 Discovery Log Number of Records 6, Generation counter 6 00:06:34.057 =====Discovery Log Entry 0====== 00:06:34.057 trtype: tcp 00:06:34.057 adrfam: ipv4 00:06:34.057 subtype: current discovery subsystem 00:06:34.057 treq: not required 00:06:34.057 portid: 0 00:06:34.057 trsvcid: 4420 00:06:34.057 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:34.057 traddr: 10.0.0.2 00:06:34.057 eflags: explicit discovery connections, duplicate discovery information 00:06:34.057 sectype: none 00:06:34.057 =====Discovery Log Entry 1====== 00:06:34.057 trtype: tcp 00:06:34.057 adrfam: ipv4 00:06:34.057 subtype: nvme subsystem 00:06:34.057 treq: not required 00:06:34.057 portid: 0 00:06:34.057 trsvcid: 4420 00:06:34.057 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:34.057 traddr: 10.0.0.2 00:06:34.057 eflags: none 00:06:34.057 sectype: none 00:06:34.057 =====Discovery Log Entry 2====== 00:06:34.057 trtype: tcp 00:06:34.057 adrfam: ipv4 00:06:34.057 subtype: nvme subsystem 00:06:34.057 treq: not required 00:06:34.057 portid: 0 00:06:34.057 trsvcid: 4420 00:06:34.057 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:34.057 traddr: 10.0.0.2 00:06:34.057 eflags: none 00:06:34.057 sectype: none 00:06:34.057 =====Discovery Log Entry 3====== 00:06:34.057 trtype: tcp 00:06:34.057 adrfam: ipv4 00:06:34.057 subtype: nvme subsystem 00:06:34.057 treq: not required 00:06:34.057 portid: 0 00:06:34.057 trsvcid: 4420 00:06:34.057 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:34.057 traddr: 10.0.0.2 
00:06:34.057 eflags: none 00:06:34.057 sectype: none 00:06:34.057 =====Discovery Log Entry 4====== 00:06:34.057 trtype: tcp 00:06:34.057 adrfam: ipv4 00:06:34.057 subtype: nvme subsystem 00:06:34.057 treq: not required 00:06:34.057 portid: 0 00:06:34.057 trsvcid: 4420 00:06:34.057 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:34.057 traddr: 10.0.0.2 00:06:34.057 eflags: none 00:06:34.057 sectype: none 00:06:34.057 =====Discovery Log Entry 5====== 00:06:34.057 trtype: tcp 00:06:34.057 adrfam: ipv4 00:06:34.057 subtype: discovery subsystem referral 00:06:34.057 treq: not required 00:06:34.057 portid: 0 00:06:34.057 trsvcid: 4430 00:06:34.057 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:34.057 traddr: 10.0.0.2 00:06:34.057 eflags: none 00:06:34.057 sectype: none 00:06:34.057 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:34.057 Perform nvmf subsystem discovery via RPC 00:06:34.057 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:34.057 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.057 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:34.057 [ 00:06:34.057 { 00:06:34.057 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:34.057 "subtype": "Discovery", 00:06:34.057 "listen_addresses": [ 00:06:34.057 { 00:06:34.057 "trtype": "TCP", 00:06:34.057 "adrfam": "IPv4", 00:06:34.057 "traddr": "10.0.0.2", 00:06:34.057 "trsvcid": "4420" 00:06:34.057 } 00:06:34.057 ], 00:06:34.057 "allow_any_host": true, 00:06:34.057 "hosts": [] 00:06:34.057 }, 00:06:34.057 { 00:06:34.057 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:34.057 "subtype": "NVMe", 00:06:34.057 "listen_addresses": [ 00:06:34.057 { 00:06:34.057 "trtype": "TCP", 00:06:34.057 "adrfam": "IPv4", 00:06:34.057 "traddr": "10.0.0.2", 00:06:34.057 "trsvcid": "4420" 00:06:34.057 } 00:06:34.057 ], 00:06:34.057 "allow_any_host": true, 00:06:34.057 "hosts": [], 00:06:34.057 "serial_number": "SPDK00000000000001", 00:06:34.057 "model_number": "SPDK bdev Controller", 00:06:34.057 "max_namespaces": 32, 00:06:34.057 "min_cntlid": 1, 00:06:34.057 "max_cntlid": 65519, 00:06:34.057 "namespaces": [ 00:06:34.057 { 00:06:34.057 "nsid": 1, 00:06:34.057 "bdev_name": "Null1", 00:06:34.057 "name": "Null1", 00:06:34.057 "nguid": "B16BD9B32492420D87C29E5B72DF281B", 00:06:34.057 "uuid": "b16bd9b3-2492-420d-87c2-9e5b72df281b" 00:06:34.057 } 00:06:34.057 ] 00:06:34.057 }, 00:06:34.057 { 00:06:34.057 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:34.057 "subtype": "NVMe", 00:06:34.057 "listen_addresses": [ 00:06:34.057 { 00:06:34.057 "trtype": "TCP", 00:06:34.057 "adrfam": "IPv4", 00:06:34.057 "traddr": "10.0.0.2", 00:06:34.057 "trsvcid": "4420" 00:06:34.057 } 00:06:34.057 ], 00:06:34.057 "allow_any_host": true, 00:06:34.057 "hosts": [], 00:06:34.057 "serial_number": "SPDK00000000000002", 00:06:34.057 "model_number": "SPDK bdev Controller", 00:06:34.057 "max_namespaces": 32, 00:06:34.057 "min_cntlid": 1, 00:06:34.057 "max_cntlid": 65519, 00:06:34.057 "namespaces": [ 00:06:34.057 { 00:06:34.057 "nsid": 1, 00:06:34.057 "bdev_name": "Null2", 00:06:34.057 "name": "Null2", 00:06:34.057 "nguid": "4F5FA1358ADE443EB8D9333ED7070C60", 00:06:34.057 "uuid": "4f5fa135-8ade-443e-b8d9-333ed7070c60" 00:06:34.057 } 00:06:34.057 ] 00:06:34.057 }, 00:06:34.057 { 00:06:34.057 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:34.057 "subtype": "NVMe", 00:06:34.057 "listen_addresses": [ 
00:06:34.057 { 00:06:34.057 "trtype": "TCP", 00:06:34.057 "adrfam": "IPv4", 00:06:34.057 "traddr": "10.0.0.2", 00:06:34.057 "trsvcid": "4420" 00:06:34.057 } 00:06:34.057 ], 00:06:34.057 "allow_any_host": true, 00:06:34.057 "hosts": [], 00:06:34.057 "serial_number": "SPDK00000000000003", 00:06:34.057 "model_number": "SPDK bdev Controller", 00:06:34.057 "max_namespaces": 32, 00:06:34.057 "min_cntlid": 1, 00:06:34.057 "max_cntlid": 65519, 00:06:34.057 "namespaces": [ 00:06:34.057 { 00:06:34.057 "nsid": 1, 00:06:34.057 "bdev_name": "Null3", 00:06:34.057 "name": "Null3", 00:06:34.057 "nguid": "3641CFB841E44634A2AF548F8681DFA6", 00:06:34.057 "uuid": "3641cfb8-41e4-4634-a2af-548f8681dfa6" 00:06:34.057 } 00:06:34.057 ] 00:06:34.057 }, 00:06:34.057 { 00:06:34.057 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:34.057 "subtype": "NVMe", 00:06:34.058 "listen_addresses": [ 00:06:34.058 { 00:06:34.058 "trtype": "TCP", 00:06:34.058 "adrfam": "IPv4", 00:06:34.058 "traddr": "10.0.0.2", 00:06:34.058 "trsvcid": "4420" 00:06:34.058 } 00:06:34.058 ], 00:06:34.058 "allow_any_host": true, 00:06:34.058 "hosts": [], 00:06:34.058 "serial_number": "SPDK00000000000004", 00:06:34.058 "model_number": "SPDK bdev Controller", 00:06:34.058 "max_namespaces": 32, 00:06:34.058 "min_cntlid": 1, 00:06:34.058 "max_cntlid": 65519, 00:06:34.058 "namespaces": [ 00:06:34.058 { 00:06:34.058 "nsid": 1, 00:06:34.058 "bdev_name": "Null4", 00:06:34.058 "name": "Null4", 00:06:34.058 "nguid": "B691E7C344DF4593A0D2AED24BDFB11D", 00:06:34.058 "uuid": "b691e7c3-44df-4593-a0d2-aed24bdfb11d" 00:06:34.058 } 00:06:34.058 ] 00:06:34.058 } 00:06:34.058 ] 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:34.058 
20:05:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:34.058 20:05:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:34.058 rmmod nvme_tcp 00:06:34.317 rmmod nvme_fabrics 00:06:34.317 rmmod nvme_keyring 00:06:34.317 20:05:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:34.317 20:05:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:34.317 20:05:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:34.317 20:05:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 117258 ']' 00:06:34.317 20:05:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 117258 00:06:34.317 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 117258 ']' 00:06:34.317 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 117258 00:06:34.317 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:06:34.317 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:34.317 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 117258 00:06:34.317 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:34.317 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:34.317 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 117258' 00:06:34.317 killing process with pid 117258 00:06:34.317 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 117258 00:06:34.317 [2024-05-16 20:05:21.275693] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:34.317 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 117258 00:06:34.576 20:05:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:34.576 20:05:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:34.576 20:05:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:34.576 20:05:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:34.576 20:05:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:34.576 20:05:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:34.576 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:34.576 20:05:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.483 20:05:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:36.483 00:06:36.483 real 0m6.034s 00:06:36.483 user 0m7.268s 
00:06:36.483 sys 0m1.817s 00:06:36.483 20:05:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.483 20:05:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.483 ************************************ 00:06:36.483 END TEST nvmf_target_discovery 00:06:36.483 ************************************ 00:06:36.483 20:05:23 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:36.483 20:05:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:36.483 20:05:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.483 20:05:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.741 ************************************ 00:06:36.741 START TEST nvmf_referrals 00:06:36.741 ************************************ 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:36.741 * Looking for test storage... 00:06:36.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.741 20:05:23 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:36.741 20:05:23 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:36.741 20:05:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:38.642 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:38.642 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:38.642 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:38.642 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:38.642 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:38.642 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:38.642 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:38.642 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:38.642 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:38.642 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:38.642 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:38.642 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:38.642 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:38.642 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:38.642 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:38.642 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:38.642 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:38.642 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:38.642 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:38.643 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:38.643 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:38.643 Found net devices under 0000:09:00.0: cvl_0_0 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:38.643 Found net devices under 0000:09:00.1: cvl_0_1 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
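What nvmf_tcp_init is doing here: with NET_TYPE=phy the two detected e810 ports are split across network namespaces so that initiator and target traffic on the same host actually crosses the physical link. cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace with the initiator address 10.0.0.1; the iptables rule and the two pings that follow just confirm reachability in both directions. Condensed from the commands in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application itself is then launched under ip netns exec cvl_0_0_ns_spdk, as visible a few lines below at nvmf/common.sh@480.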
00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:38.643 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:38.901 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:38.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:38.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:06:38.902 00:06:38.902 --- 10.0.0.2 ping statistics --- 00:06:38.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.902 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:38.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:38.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:06:38.902 00:06:38.902 --- 10.0.0.1 ping statistics --- 00:06:38.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.902 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=119477 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 119477 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 119477 ']' 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.902 20:05:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:38.902 [2024-05-16 20:05:25.891666] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:38.902 [2024-05-16 20:05:25.891755] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.902 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.902 [2024-05-16 20:05:25.955408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.160 [2024-05-16 20:05:26.073401] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:39.160 [2024-05-16 20:05:26.073453] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:39.160 [2024-05-16 20:05:26.073481] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.160 [2024-05-16 20:05:26.073493] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.160 [2024-05-16 20:05:26.073503] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:39.160 [2024-05-16 20:05:26.073608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.160 [2024-05-16 20:05:26.073658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.160 [2024-05-16 20:05:26.073707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.160 [2024-05-16 20:05:26.073711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:39.160 [2024-05-16 20:05:26.231702] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:39.160 [2024-05-16 20:05:26.243670] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:39.160 [2024-05-16 20:05:26.243986] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:39.160 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.161 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:39.161 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:39.161 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:39.419 20:05:26 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:39.419 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:39.677 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:39.936 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:39.936 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:39.936 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:39.936 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:39.936 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:39.936 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:39.936 20:05:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:39.936 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:39.936 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:39.936 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:39.936 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:39.936 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:39.936 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
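The pair of checks around this point pins down how referral records surface to a host: a referral registered with -n discovery is reported by the discovery controller as a "discovery subsystem referral" record carrying nqn.2014-08.org.nvmexpress.discovery, whereas one registered with -n nqn.2016-06.io.spdk:cnode1 comes back as an "nvme subsystem" record carrying that NQN. The verification combines the two jq filters used by get_discovery_entries above; a standalone version (host NQN/ID as generated earlier by nvme gen-hostnqn):

  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      | jq -r '.records[] | select(.subtype == "nvme subsystem") | .subnqn'
  # expected: nqn.2016-06.io.spdk:cnode1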
00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:40.194 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:40.452 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:40.452 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:40.452 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:40.452 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:40.452 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:40.452 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:40.453 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:40.453 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:40.453 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.453 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.453 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.453 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:40.453 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.453 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:40.453 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.453 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.453 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:40.453 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:40.453 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:40.453 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:40.453 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:40.453 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:40.453 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:40.710 rmmod nvme_tcp 00:06:40.710 rmmod nvme_fabrics 00:06:40.710 rmmod nvme_keyring 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 119477 ']' 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 119477 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 119477 ']' 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 119477 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 119477 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 119477' 00:06:40.710 killing process with pid 119477 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 119477 00:06:40.710 [2024-05-16 20:05:27.757150] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:40.710 20:05:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 119477 00:06:40.968 20:05:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:40.968 20:05:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:40.968 20:05:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:40.968 20:05:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:40.968 20:05:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
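Stripped of the xtrace noise, the referrals pass that just finished exercises this lifecycle against the discovery listener on 10.0.0.2:8009; the sketch below uses scripts/rpc.py directly, but the verbs and arguments are the same ones the rpc_cmd wrapper issued above:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do                  # add three plain referrals
      rpc.py nvmf_discovery_add_referral -t tcp -a $ip -s 4430
  done
  rpc.py nvmf_discovery_get_referrals | jq length              # 3
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do                  # and remove them again
      rpc.py nvmf_discovery_remove_referral -t tcp -a $ip -s 4430
  done
  # re-add 127.0.0.2 twice: once as a discovery referral, once pointing at a subsystem
  rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
  rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
  rpc.py nvmf_discovery_get_referrals | jq length              # back to 0

After each mutation the script cross-checks the RPC view against what a kernel host sees via nvme discover on the same 8009 listener, which is what all the 127.0.0.x comparisons above were doing.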
00:06:40.968 20:05:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.968 20:05:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:40.968 20:05:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.500 20:05:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:43.500 00:06:43.500 real 0m6.438s 00:06:43.500 user 0m9.029s 00:06:43.500 sys 0m1.923s 00:06:43.500 20:05:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.500 20:05:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:43.500 ************************************ 00:06:43.500 END TEST nvmf_referrals 00:06:43.500 ************************************ 00:06:43.500 20:05:30 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:43.500 20:05:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:43.500 20:05:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.500 20:05:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:43.500 ************************************ 00:06:43.500 START TEST nvmf_connect_disconnect 00:06:43.500 ************************************ 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:43.500 * Looking for test storage... 00:06:43.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.500 20:05:30 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.500 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
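The common.sh settings traced above (NVME_HOSTNQN from nvme gen-hostnqn, NVME_HOSTID, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn, NVMF_PORT=4420) are the identity and target parameters that the connect/disconnect loop later feeds to nvme-cli. The loop itself is not part of this excerpt; a representative single iteration under those assumptions would be:

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  # ...verify the namespace shows up (e.g. via nvme list), then tear it down...
  nvme disconnect -n nqn.2016-06.io.spdk:testnqn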
00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:06:43.501 20:05:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:06:45.401 
20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:45.401 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:45.401 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:45.401 Found net devices under 0000:09:00.0: cvl_0_0 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:45.401 Found net devices under 0000:09:00.1: cvl_0_1 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:45.401 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:45.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:45.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:06:45.402 00:06:45.402 --- 10.0.0.2 ping statistics --- 00:06:45.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.402 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:45.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:45.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:06:45.402 00:06:45.402 --- 10.0.0.1 ping statistics --- 00:06:45.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.402 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=121716 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 121716 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 121716 ']' 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:45.402 20:05:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:45.402 [2024-05-16 20:05:32.327769] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
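
Condensing the nvmf_tcp_init sequence recorded above: the target-side E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace, the initiator keeps cvl_0_1 in the root namespace, each side gets an address in 10.0.0.0/24, TCP port 4420 is opened in the firewall, and reachability is verified in both directions before nvmf_tgt is launched through ip netns exec. A sketch of the commands as they appear in the trace (no error handling):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                                  # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> root namespace
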
00:06:45.402 [2024-05-16 20:05:32.327847] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.402 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.402 [2024-05-16 20:05:32.402293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.402 [2024-05-16 20:05:32.523511] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:45.402 [2024-05-16 20:05:32.523579] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.402 [2024-05-16 20:05:32.523595] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:45.402 [2024-05-16 20:05:32.523610] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:45.402 [2024-05-16 20:05:32.523621] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:45.402 [2024-05-16 20:05:32.523719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.402 [2024-05-16 20:05:32.523784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.402 [2024-05-16 20:05:32.523878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.402 [2024-05-16 20:05:32.523883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:46.334 [2024-05-16 20:05:33.285637] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:46.334 20:05:33 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:46.334 [2024-05-16 20:05:33.335833] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:46.334 [2024-05-16 20:05:33.336155] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:46.334 20:05:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:48.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:52.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:54.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:57.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:59.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:59.746 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:06:59.746 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:06:59.746 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:59.746 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:06:59.746 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:59.746 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:06:59.746 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:59.746 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:59.746 rmmod nvme_tcp 00:07:00.004 rmmod nvme_fabrics 00:07:00.004 rmmod nvme_keyring 00:07:00.004 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:00.004 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:00.004 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:00.004 20:05:46 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 121716 ']' 00:07:00.004 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 121716 00:07:00.004 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 121716 ']' 00:07:00.004 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 121716 00:07:00.004 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:07:00.004 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:00.004 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 121716 00:07:00.004 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:00.004 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:00.004 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 121716' 00:07:00.004 killing process with pid 121716 00:07:00.004 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 121716 00:07:00.004 [2024-05-16 20:05:46.974575] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:00.004 20:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 121716 00:07:00.263 20:05:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:00.264 20:05:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:00.264 20:05:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:00.264 20:05:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:00.264 20:05:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:00.264 20:05:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.264 20:05:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:00.264 20:05:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.170 20:05:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:02.170 00:07:02.170 real 0m19.187s 00:07:02.170 user 0m58.374s 00:07:02.170 sys 0m3.182s 00:07:02.170 20:05:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.170 20:05:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:02.170 ************************************ 00:07:02.170 END TEST nvmf_connect_disconnect 00:07:02.170 ************************************ 00:07:02.428 20:05:49 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:02.428 20:05:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:02.428 20:05:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.428 20:05:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:02.428 ************************************ 00:07:02.428 START TEST nvmf_multitarget 
00:07:02.428 ************************************ 00:07:02.428 20:05:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:02.428 * Looking for test storage... 00:07:02.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.428 20:05:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.428 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:02.428 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.428 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.428 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.428 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.428 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.428 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.428 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.428 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.428 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.428 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.428 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:02.428 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:02.428 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.428 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
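
The common.sh preamble traced just above also derives the initiator identity once per test: nvme gen-hostnqn produces a UUID-based host NQN, the same UUID becomes the host ID, and both are packaged as command-line flags (the trace defines NVME_CONNECT='nvme connect' alongside them). A sketch of that assembly; the values are the ones recorded above, while the exact way the ID is extracted from the NQN is an assumption, since the trace only shows the result:

  NVME_HOSTNQN=$(nvme gen-hostnqn)                       # nqn.2014-08.org.nvmexpress:uuid:29f67375-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}                    # assumed extraction of the UUID portion
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
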
00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:02.429 20:05:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:04.333 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:04.333 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:04.333 Found net devices under 0000:09:00.0: cvl_0_0 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:04.333 Found net devices under 0000:09:00.1: cvl_0_1 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:04.333 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:04.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:04.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:07:04.592 00:07:04.592 --- 10.0.0.2 ping statistics --- 00:07:04.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.592 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:04.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:04.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:07:04.592 00:07:04.592 --- 10.0.0.1 ping statistics --- 00:07:04.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.592 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=125515 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 125515 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 125515 ']' 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:04.592 20:05:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:04.592 [2024-05-16 20:05:51.672169] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
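
Both nvmfappstart calls in this log (nvmfpid=121716 earlier, nvmfpid=125515 here) resolve to the same command line, ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, followed by waitforlisten blocking until the RPC socket /var/tmp/spdk.sock answers. A rough sketch of how that command is put together; the base NVMF_APP array and the backgrounding are inferred from the recorded invocation rather than taken from common.sh itself:

  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)   # inferred base command
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)              # trace tag nvmf/common.sh@29: shm id + tracepoint mask
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # trace tag nvmf/common.sh@270: prefix with 'ip netns exec cvl_0_0_ns_spdk'
  "${NVMF_APP[@]}" -m 0xF &                                # nvmfappstart -m 0xF
  nvmfpid=$!
  waitforlisten "$nvmfpid"                                 # returns once /var/tmp/spdk.sock accepts RPCs
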
00:07:04.592 [2024-05-16 20:05:51.672261] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.592 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.851 [2024-05-16 20:05:51.740997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.851 [2024-05-16 20:05:51.859739] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:04.851 [2024-05-16 20:05:51.859792] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:04.851 [2024-05-16 20:05:51.859817] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:04.851 [2024-05-16 20:05:51.859831] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:04.851 [2024-05-16 20:05:51.859843] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:04.851 [2024-05-16 20:05:51.859916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.851 [2024-05-16 20:05:51.859993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.851 [2024-05-16 20:05:51.860085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.851 [2024-05-16 20:05:51.860088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.783 20:05:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:05.783 20:05:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:07:05.783 20:05:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:05.783 20:05:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:05.783 20:05:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:05.783 20:05:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:05.783 20:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:05.783 20:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:05.783 20:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:05.783 20:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:05.783 20:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:05.783 "nvmf_tgt_1" 00:07:05.783 20:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:06.041 "nvmf_tgt_2" 00:07:06.041 20:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:06.041 20:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:06.041 20:05:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:06.041 
20:05:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:06.041 true 00:07:06.041 20:05:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:06.299 true 00:07:06.299 20:05:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:06.299 20:05:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:06.299 20:05:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:06.299 20:05:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:06.299 20:05:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:06.299 20:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:06.299 20:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:06.299 20:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:06.299 20:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:06.299 20:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:06.299 20:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:06.299 rmmod nvme_tcp 00:07:06.299 rmmod nvme_fabrics 00:07:06.299 rmmod nvme_keyring 00:07:06.299 20:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:06.557 20:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:06.557 20:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:06.557 20:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 125515 ']' 00:07:06.557 20:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 125515 00:07:06.557 20:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 125515 ']' 00:07:06.557 20:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 125515 00:07:06.557 20:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:07:06.557 20:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:06.557 20:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125515 00:07:06.557 20:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:06.557 20:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:06.557 20:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125515' 00:07:06.557 killing process with pid 125515 00:07:06.557 20:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 125515 00:07:06.557 20:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 125515 00:07:06.816 20:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:06.816 20:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:06.816 20:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:06.816 20:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:06.816 20:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:06.816 20:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.816 20:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:06.816 20:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.750 20:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:08.750 00:07:08.750 real 0m6.431s 00:07:08.750 user 0m9.031s 00:07:08.750 sys 0m1.983s 00:07:08.750 20:05:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.750 20:05:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:08.750 ************************************ 00:07:08.750 END TEST nvmf_multitarget 00:07:08.750 ************************************ 00:07:08.750 20:05:55 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:08.750 20:05:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:08.750 20:05:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.750 20:05:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:08.750 ************************************ 00:07:08.750 START TEST nvmf_rpc 00:07:08.750 ************************************ 00:07:08.750 20:05:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:08.750 * Looking for test storage... 00:07:08.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.750 20:05:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:08.750 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:08.750 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.750 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.750 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.007 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:09.008 20:05:55 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.008 
20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:09.008 20:05:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:10.908 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:10.908 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:10.908 Found net devices under 0000:09:00.0: cvl_0_0 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.908 
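The e810 discovery traced above walks the PCI bus by vendor/device ID and then maps each matching function to its kernel net device through sysfs. A minimal standalone sketch of the same idea, not the test's gather_supported_nvmf_pci_devs helper; the 0x8086/0x159b pair is the E810 ID seen in this run:

    # Enumerate PCI functions matching the E810 vendor/device IDs, then print
    # the net interfaces the kernel created for them (requires the ice driver bound).
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")    # e.g. 0x8086
        device=$(cat "$pci/device")    # e.g. 0x159b
        if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
            # The net/ subdirectory only exists once a netdev driver is bound.
            for dev in "$pci"/net/*; do
                [[ -e $dev ]] && echo "Found net device under ${pci##*/}: ${dev##*/}"
            done
        fi
    done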
20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:10.908 Found net devices under 0000:09:00.1: cvl_0_1 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:10.908 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:10.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:10.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:07:10.909 00:07:10.909 --- 10.0.0.2 ping statistics --- 00:07:10.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.909 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:10.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:10.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:07:10.909 00:07:10.909 --- 10.0.0.1 ping statistics --- 00:07:10.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.909 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:10.909 20:05:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:10.909 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:10.909 20:05:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:10.909 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:10.909 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.909 20:05:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=127643 00:07:10.909 20:05:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:10.909 20:05:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 127643 00:07:10.909 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 127643 ']' 00:07:10.909 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.909 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:10.909 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.909 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:10.909 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.168 [2024-05-16 20:05:58.066032] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
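Condensed from the nvmf_tcp_init trace above: one port (cvl_0_0) is moved into a fresh network namespace to act as the target side, the other (cvl_0_1) stays in the root namespace as the initiator, connectivity is sanity-checked with ping, and nvmf_tgt is launched inside that namespace. A hedged re-statement of those steps as a plain script; the build-tree path ./build/bin/nvmf_tgt stands in for the full workspace path in the log, and error handling is omitted:

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                 # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns
    # Start the SPDK target inside the namespace, as nvmfappstart does above.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &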
00:07:11.168 [2024-05-16 20:05:58.066107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.168 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.168 [2024-05-16 20:05:58.129033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.168 [2024-05-16 20:05:58.240331] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:11.168 [2024-05-16 20:05:58.240397] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.168 [2024-05-16 20:05:58.240411] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:11.168 [2024-05-16 20:05:58.240422] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:11.168 [2024-05-16 20:05:58.240432] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:11.168 [2024-05-16 20:05:58.240507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.168 [2024-05-16 20:05:58.240584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.168 [2024-05-16 20:05:58.240640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.168 [2024-05-16 20:05:58.240642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:11.426 "tick_rate": 2700000000, 00:07:11.426 "poll_groups": [ 00:07:11.426 { 00:07:11.426 "name": "nvmf_tgt_poll_group_000", 00:07:11.426 "admin_qpairs": 0, 00:07:11.426 "io_qpairs": 0, 00:07:11.426 "current_admin_qpairs": 0, 00:07:11.426 "current_io_qpairs": 0, 00:07:11.426 "pending_bdev_io": 0, 00:07:11.426 "completed_nvme_io": 0, 00:07:11.426 "transports": [] 00:07:11.426 }, 00:07:11.426 { 00:07:11.426 "name": "nvmf_tgt_poll_group_001", 00:07:11.426 "admin_qpairs": 0, 00:07:11.426 "io_qpairs": 0, 00:07:11.426 "current_admin_qpairs": 0, 00:07:11.426 "current_io_qpairs": 0, 00:07:11.426 "pending_bdev_io": 0, 00:07:11.426 "completed_nvme_io": 0, 00:07:11.426 "transports": [] 00:07:11.426 }, 00:07:11.426 { 00:07:11.426 "name": "nvmf_tgt_poll_group_002", 00:07:11.426 "admin_qpairs": 0, 00:07:11.426 "io_qpairs": 0, 00:07:11.426 "current_admin_qpairs": 0, 00:07:11.426 "current_io_qpairs": 0, 00:07:11.426 "pending_bdev_io": 0, 00:07:11.426 "completed_nvme_io": 0, 00:07:11.426 "transports": [] 
00:07:11.426 }, 00:07:11.426 { 00:07:11.426 "name": "nvmf_tgt_poll_group_003", 00:07:11.426 "admin_qpairs": 0, 00:07:11.426 "io_qpairs": 0, 00:07:11.426 "current_admin_qpairs": 0, 00:07:11.426 "current_io_qpairs": 0, 00:07:11.426 "pending_bdev_io": 0, 00:07:11.426 "completed_nvme_io": 0, 00:07:11.426 "transports": [] 00:07:11.426 } 00:07:11.426 ] 00:07:11.426 }' 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.426 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.426 [2024-05-16 20:05:58.480823] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.427 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.427 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:11.427 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.427 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.427 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.427 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:11.427 "tick_rate": 2700000000, 00:07:11.427 "poll_groups": [ 00:07:11.427 { 00:07:11.427 "name": "nvmf_tgt_poll_group_000", 00:07:11.427 "admin_qpairs": 0, 00:07:11.427 "io_qpairs": 0, 00:07:11.427 "current_admin_qpairs": 0, 00:07:11.427 "current_io_qpairs": 0, 00:07:11.427 "pending_bdev_io": 0, 00:07:11.427 "completed_nvme_io": 0, 00:07:11.427 "transports": [ 00:07:11.427 { 00:07:11.427 "trtype": "TCP" 00:07:11.427 } 00:07:11.427 ] 00:07:11.427 }, 00:07:11.427 { 00:07:11.427 "name": "nvmf_tgt_poll_group_001", 00:07:11.427 "admin_qpairs": 0, 00:07:11.427 "io_qpairs": 0, 00:07:11.427 "current_admin_qpairs": 0, 00:07:11.427 "current_io_qpairs": 0, 00:07:11.427 "pending_bdev_io": 0, 00:07:11.427 "completed_nvme_io": 0, 00:07:11.427 "transports": [ 00:07:11.427 { 00:07:11.427 "trtype": "TCP" 00:07:11.427 } 00:07:11.427 ] 00:07:11.427 }, 00:07:11.427 { 00:07:11.427 "name": "nvmf_tgt_poll_group_002", 00:07:11.427 "admin_qpairs": 0, 00:07:11.427 "io_qpairs": 0, 00:07:11.427 "current_admin_qpairs": 0, 00:07:11.427 "current_io_qpairs": 0, 00:07:11.427 "pending_bdev_io": 0, 00:07:11.427 "completed_nvme_io": 0, 00:07:11.427 "transports": [ 00:07:11.427 { 00:07:11.427 "trtype": "TCP" 00:07:11.427 } 00:07:11.427 ] 00:07:11.427 }, 00:07:11.427 { 00:07:11.427 "name": "nvmf_tgt_poll_group_003", 00:07:11.427 "admin_qpairs": 0, 00:07:11.427 "io_qpairs": 0, 00:07:11.427 "current_admin_qpairs": 0, 00:07:11.427 "current_io_qpairs": 0, 00:07:11.427 "pending_bdev_io": 0, 00:07:11.427 "completed_nvme_io": 0, 00:07:11.427 "transports": [ 00:07:11.427 { 00:07:11.427 "trtype": "TCP" 00:07:11.427 } 00:07:11.427 ] 00:07:11.427 } 00:07:11.427 ] 
00:07:11.427 }' 00:07:11.427 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:11.427 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:11.427 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:11.427 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:11.427 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:11.427 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:11.427 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:11.427 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:11.427 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.685 Malloc1 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.685 [2024-05-16 20:05:58.633635] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:11.685 [2024-05-16 20:05:58.633950] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.685 20:05:58 
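The rpc_cmd calls traced above provision the target end to end: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, a subsystem, a namespace backed by that bdev, and a listener on 10.0.0.2:4420, with allow_any_host disabled so the negative connect test that follows has something to reject. Roughly the same sequence issued through SPDK's rpc.py client (which rpc_cmd wraps in these scripts); the relative ./scripts/rpc.py path is an assumption for the sketch, not the workspace path used in this run:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # keep the host ACL closed for now
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420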
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:07:11.685 [2024-05-16 20:05:58.656368] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:07:11.685 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:11.685 could not add new controller: failed to write to nvme-fabrics device 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.685 20:05:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:12.251 20:05:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
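This is the access-control check: the first nvme connect is expected to fail with "does not allow host" because the subsystem's host list is closed, and it only succeeds after nvmf_subsystem_add_host registers the initiator's NQN; waitforserial then polls lsblk until the namespace shows up. A condensed sketch of that pattern, using the host NQN from this run and a simplified poll loop rather than the test's waitforserial helper:

    NQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    # With the ACL closed this connect is expected to be rejected.
    nvme connect -t tcp -n "$NQN" --hostnqn="$HOSTNQN" -a 10.0.0.2 -s 4420 \
        || echo "rejected as expected"
    # Whitelist the host, then the same connect succeeds.
    ./scripts/rpc.py nvmf_subsystem_add_host "$NQN" "$HOSTNQN"
    nvme connect -t tcp -n "$NQN" --hostnqn="$HOSTNQN" -a 10.0.0.2 -s 4420
    # Wait until a block device carrying the subsystem serial appears.
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done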
00:07:12.251 20:05:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:12.251 20:05:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:12.251 20:05:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:12.251 20:05:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:14.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:14.780 [2024-05-16 20:06:01.397747] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:07:14.780 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:14.780 could not add new controller: failed to write to nvme-fabrics device 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.780 20:06:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:15.039 20:06:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:15.039 20:06:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:15.039 20:06:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:15.039 20:06:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:15.039 20:06:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:16.938 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:16.938 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:16.938 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:16.938 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:16.938 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:16.938 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:16.938 20:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:17.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.196 [2024-05-16 20:06:04.178786] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.196 20:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:17.762 20:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:17.762 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:17.762 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:17.762 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:17.762 20:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:20.362 
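Each of the five iterations that follow repeats the same create, connect, verify, disconnect, delete cycle; only the timestamps differ. One iteration written out as a plain loop body, a sketch of what target/rpc.sh's loop does rather than the script itself, with the host identifiers taken from this run:

    NQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
    for i in $(seq 1 5); do
        ./scripts/rpc.py nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
        ./scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
        ./scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
        ./scripts/rpc.py nvmf_subsystem_allow_any_host "$NQN"
        nvme connect -t tcp -n "$NQN" --hostnqn="$HOSTNQN" --hostid="$HOSTID" -a 10.0.0.2 -s 4420
        until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
        nvme disconnect -n "$NQN"
        ./scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 5
        ./scripts/rpc.py nvmf_delete_subsystem "$NQN"
    done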
20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:20.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.362 [2024-05-16 20:06:06.931632] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.362 20:06:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:20.619 20:06:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:20.619 20:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:20.619 20:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:20.619 20:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:20.619 20:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:22.515 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:22.515 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:22.515 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:22.515 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:22.515 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:22.515 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:22.515 20:06:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:22.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:22.515 20:06:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:22.515 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:22.515 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:22.515 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:22.515 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:22.515 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.773 20:06:09 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.773 [2024-05-16 20:06:09.695990] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.773 20:06:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:23.339 20:06:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:23.339 20:06:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:23.339 20:06:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:23.339 20:06:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:23.339 20:06:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:25.236 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:25.236 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:25.236 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:25.236 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:25.236 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:25.236 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:25.236 20:06:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:25.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1215 -- # local i=0 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.494 [2024-05-16 20:06:12.462473] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.494 20:06:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:26.060 20:06:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:07:26.060 20:06:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:26.060 20:06:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:26.060 20:06:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:26.060 20:06:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:28.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.588 
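The teardown half of each iteration mirrors the setup: disconnect the initiator, wait until the namespace's serial disappears from lsblk, then remove the namespace and delete the subsystem. As a sketch, with a retry bound similar to the waitforserial helpers traced above rather than their exact code:

    NQN=nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n "$NQN"
    # waitforserial_disconnect: give the kernel a bounded number of tries to drop the device.
    i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        (( i++ >= 15 )) && { echo "device still present" >&2; break; }
        sleep 1
    done
    ./scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 5
    ./scripts/rpc.py nvmf_delete_subsystem "$NQN"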
[2024-05-16 20:06:15.220476] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.588 20:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:28.845 20:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:28.845 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:28.845 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:28.845 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:28.845 20:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:30.742 20:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:30.742 20:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:30.742 20:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:30.742 20:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:30.742 20:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:30.742 20:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:30.742 20:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:31.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.001 20:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:31.001 20:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:31.001 20:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:31.001 20:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.001 20:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:31.001 20:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.001 20:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:31.001 20:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:31.001 20:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.001 20:06:17 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.001 [2024-05-16 20:06:18.027095] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.001 [2024-05-16 20:06:18.075171] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.001 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.002 [2024-05-16 20:06:18.123333] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:31.002 
20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.002 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.260 [2024-05-16 20:06:18.171525] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.260 20:06:18 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.260 [2024-05-16 20:06:18.219663] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.260 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
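That completes the target/rpc.sh loop that repeatedly creates and tears down the same subsystem (lines 99-107 in the trace) to exercise the lifecycle paths. Stripped of the rpc_cmd/xtrace plumbing, one iteration is essentially the following sequence of scripts/rpc.py calls; rpc_py below is a hypothetical shorthand for the rpc.py path in this workspace, and $loops is the iteration count the script defines earlier:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 "$loops"); do
        $rpc_py nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME           # rpc.sh@100
        $rpc_py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420  # rpc.sh@101
        $rpc_py nvmf_subsystem_add_ns "$nqn" Malloc1                           # rpc.sh@102
        $rpc_py nvmf_subsystem_allow_any_host "$nqn"                           # rpc.sh@103
        $rpc_py nvmf_subsystem_remove_ns "$nqn" 1                              # rpc.sh@105
        $rpc_py nvmf_delete_subsystem "$nqn"                                   # rpc.sh@107
    done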
00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:31.261 "tick_rate": 2700000000, 00:07:31.261 "poll_groups": [ 00:07:31.261 { 00:07:31.261 "name": "nvmf_tgt_poll_group_000", 00:07:31.261 "admin_qpairs": 2, 00:07:31.261 "io_qpairs": 84, 00:07:31.261 "current_admin_qpairs": 0, 00:07:31.261 "current_io_qpairs": 0, 00:07:31.261 "pending_bdev_io": 0, 00:07:31.261 "completed_nvme_io": 185, 00:07:31.261 "transports": [ 00:07:31.261 { 00:07:31.261 "trtype": "TCP" 00:07:31.261 } 00:07:31.261 ] 00:07:31.261 }, 00:07:31.261 { 00:07:31.261 "name": "nvmf_tgt_poll_group_001", 00:07:31.261 "admin_qpairs": 2, 00:07:31.261 "io_qpairs": 84, 00:07:31.261 "current_admin_qpairs": 0, 00:07:31.261 "current_io_qpairs": 0, 00:07:31.261 "pending_bdev_io": 0, 00:07:31.261 "completed_nvme_io": 184, 00:07:31.261 "transports": [ 00:07:31.261 { 00:07:31.261 "trtype": "TCP" 00:07:31.261 } 00:07:31.261 ] 00:07:31.261 }, 00:07:31.261 { 00:07:31.261 "name": "nvmf_tgt_poll_group_002", 00:07:31.261 "admin_qpairs": 1, 00:07:31.261 "io_qpairs": 84, 00:07:31.261 "current_admin_qpairs": 0, 00:07:31.261 "current_io_qpairs": 0, 00:07:31.261 "pending_bdev_io": 0, 00:07:31.261 "completed_nvme_io": 134, 00:07:31.261 "transports": [ 00:07:31.261 { 00:07:31.261 "trtype": "TCP" 00:07:31.261 } 00:07:31.261 ] 00:07:31.261 }, 00:07:31.261 { 00:07:31.261 "name": "nvmf_tgt_poll_group_003", 00:07:31.261 "admin_qpairs": 2, 00:07:31.261 "io_qpairs": 84, 00:07:31.261 "current_admin_qpairs": 0, 00:07:31.261 "current_io_qpairs": 0, 00:07:31.261 "pending_bdev_io": 0, 00:07:31.261 "completed_nvme_io": 183, 00:07:31.261 "transports": [ 00:07:31.261 { 00:07:31.261 "trtype": "TCP" 00:07:31.261 } 00:07:31.261 ] 00:07:31.261 } 00:07:31.261 ] 00:07:31.261 }' 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:31.261 20:06:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:31.261 rmmod nvme_tcp 00:07:31.261 rmmod nvme_fabrics 00:07:31.261 rmmod nvme_keyring 00:07:31.519 
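The jsum helper applied to that nvmf_get_stats dump simply sums one numeric field across all poll groups with jq and awk. A minimal equivalent, assuming the JSON arrives on stdin rather than through the $stats variable the script captures, is:

    # Sum a numeric field over every poll group in the nvmf_get_stats JSON, e.g.
    #   rpc.py nvmf_get_stats | jsum '.poll_groups[].admin_qpairs'   # 2+2+1+2 = 7 above
    #   rpc.py nvmf_get_stats | jsum '.poll_groups[].io_qpairs'      # 4 x 84  = 336 above
    jsum() {
        local filter=$1
        jq "$filter" | awk '{s+=$1} END {print s}'
    }

The two (( ... > 0 )) checks that follow only assert the totals are non-zero, i.e. that the connect/disconnect loop really pushed admin and I/O queue pairs through every poll group.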
20:06:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:31.519 20:06:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:31.519 20:06:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:31.519 20:06:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 127643 ']' 00:07:31.519 20:06:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 127643 00:07:31.519 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 127643 ']' 00:07:31.519 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 127643 00:07:31.519 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:07:31.519 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:31.519 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 127643 00:07:31.519 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:31.519 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:31.519 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 127643' 00:07:31.519 killing process with pid 127643 00:07:31.519 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 127643 00:07:31.519 [2024-05-16 20:06:18.444118] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:31.519 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 127643 00:07:31.778 20:06:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:31.778 20:06:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:31.778 20:06:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:31.778 20:06:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:31.778 20:06:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:31.778 20:06:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.778 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.778 20:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.700 20:06:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:33.700 00:07:33.700 real 0m24.955s 00:07:33.700 user 1m20.987s 00:07:33.700 sys 0m3.820s 00:07:33.700 20:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.700 20:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.700 ************************************ 00:07:33.700 END TEST nvmf_rpc 00:07:33.700 ************************************ 00:07:33.700 20:06:20 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:33.700 20:06:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:33.700 20:06:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.700 20:06:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:33.959 ************************************ 00:07:33.959 START TEST nvmf_invalid 00:07:33.959 ************************************ 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:33.959 * Looking for test storage... 00:07:33.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:33.959 20:06:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:33.960 20:06:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:33.960 20:06:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:33.960 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:33.960 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.960 20:06:20 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:07:33.960 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:33.960 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:33.960 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.960 20:06:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.960 20:06:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.960 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:33.960 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:33.960 20:06:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:33.960 20:06:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:35.863 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:35.864 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:35.864 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:35.864 Found net devices under 0000:09:00.0: cvl_0_0 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:35.864 Found net devices under 0000:09:00.1: cvl_0_1 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:35.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:35.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:07:35.864 00:07:35.864 --- 10.0.0.2 ping statistics --- 00:07:35.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.864 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:35.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:07:35.864 00:07:35.864 --- 10.0.0.1 ping statistics --- 00:07:35.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.864 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=132751 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 132751 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 132751 ']' 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:35.864 20:06:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:36.123 [2024-05-16 20:06:23.010542] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
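Putting the setup trace together: nvmf_tcp_init moved one E810 port (cvl_0_0) into a private network namespace as the target interface at 10.0.0.2, left the other (cvl_0_1) in the root namespace as the initiator at 10.0.0.1, verified reachability with ping in both directions, loaded nvme-tcp on the initiator side, and nvmfappstart then launched the target application inside that namespace. The essential commands, with the binary path and the core/tracepoint masks as reported in the log; the wait loop at the end is a sketch of one way to block until the RPC socket is up, not the script's exact waitforlisten implementation:

    # Topology: target port inside the namespace, initiator port outside.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    modprobe nvme-tcp

    # Start nvmf_tgt on 4 cores (-m 0xF) with all tracepoint groups enabled (-e 0xFFFF).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!   # 132751 in this run

    # Hypothetical wait: poll the app's RPC socket until it answers.
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done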
00:07:36.123 [2024-05-16 20:06:23.010618] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.123 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.123 [2024-05-16 20:06:23.074814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.123 [2024-05-16 20:06:23.187996] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.123 [2024-05-16 20:06:23.188051] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.123 [2024-05-16 20:06:23.188067] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.123 [2024-05-16 20:06:23.188080] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.123 [2024-05-16 20:06:23.188091] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.123 [2024-05-16 20:06:23.188189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.123 [2024-05-16 20:06:23.188248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.123 [2024-05-16 20:06:23.188314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.123 [2024-05-16 20:06:23.188317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.381 20:06:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:36.381 20:06:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:07:36.381 20:06:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:36.381 20:06:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:36.381 20:06:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:36.381 20:06:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.381 20:06:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:36.381 20:06:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4776 00:07:36.645 [2024-05-16 20:06:23.593479] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:36.645 20:06:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:07:36.645 { 00:07:36.645 "nqn": "nqn.2016-06.io.spdk:cnode4776", 00:07:36.645 "tgt_name": "foobar", 00:07:36.645 "method": "nvmf_create_subsystem", 00:07:36.645 "req_id": 1 00:07:36.646 } 00:07:36.646 Got JSON-RPC error response 00:07:36.646 response: 00:07:36.646 { 00:07:36.646 "code": -32603, 00:07:36.646 "message": "Unable to find target foobar" 00:07:36.646 }' 00:07:36.646 20:06:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:36.646 { 00:07:36.646 "nqn": "nqn.2016-06.io.spdk:cnode4776", 00:07:36.646 "tgt_name": "foobar", 00:07:36.646 "method": "nvmf_create_subsystem", 00:07:36.646 "req_id": 1 00:07:36.646 } 00:07:36.646 Got JSON-RPC error response 00:07:36.646 response: 00:07:36.646 { 00:07:36.646 "code": -32603, 00:07:36.646 "message": "Unable to find target foobar" 00:07:36.646 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:36.646 20:06:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:36.646 20:06:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode7965 00:07:36.906 [2024-05-16 20:06:23.826278] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7965: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:36.906 20:06:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:36.906 { 00:07:36.906 "nqn": "nqn.2016-06.io.spdk:cnode7965", 00:07:36.906 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:36.906 "method": "nvmf_create_subsystem", 00:07:36.906 "req_id": 1 00:07:36.906 } 00:07:36.906 Got JSON-RPC error response 00:07:36.906 response: 00:07:36.906 { 00:07:36.906 "code": -32602, 00:07:36.906 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:36.906 }' 00:07:36.906 20:06:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:36.906 { 00:07:36.906 "nqn": "nqn.2016-06.io.spdk:cnode7965", 00:07:36.906 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:36.906 "method": "nvmf_create_subsystem", 00:07:36.906 "req_id": 1 00:07:36.906 } 00:07:36.906 Got JSON-RPC error response 00:07:36.906 response: 00:07:36.906 { 00:07:36.906 "code": -32602, 00:07:36.906 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:36.906 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:36.906 20:06:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:36.906 20:06:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10057 00:07:37.165 [2024-05-16 20:06:24.071051] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10057: invalid model number 'SPDK_Controller' 00:07:37.165 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:37.165 { 00:07:37.165 "nqn": "nqn.2016-06.io.spdk:cnode10057", 00:07:37.165 "model_number": "SPDK_Controller\u001f", 00:07:37.165 "method": "nvmf_create_subsystem", 00:07:37.165 "req_id": 1 00:07:37.165 } 00:07:37.165 Got JSON-RPC error response 00:07:37.165 response: 00:07:37.165 { 00:07:37.165 "code": -32602, 00:07:37.165 "message": "Invalid MN SPDK_Controller\u001f" 00:07:37.165 }' 00:07:37.165 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:37.165 { 00:07:37.165 "nqn": "nqn.2016-06.io.spdk:cnode10057", 00:07:37.165 "model_number": "SPDK_Controller\u001f", 00:07:37.165 "method": "nvmf_create_subsystem", 00:07:37.165 "req_id": 1 00:07:37.165 } 00:07:37.166 Got JSON-RPC error response 00:07:37.166 response: 00:07:37.166 { 00:07:37.166 "code": -32602, 00:07:37.166 "message": "Invalid MN SPDK_Controller\u001f" 00:07:37.166 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' 
'92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 56 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.166 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ U == \- ]] 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'U9A qlw4Y[R=u@#8V;(rJ' 00:07:37.167 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'U9A qlw4Y[R=u@#8V;(rJ' nqn.2016-06.io.spdk:cnode12282 00:07:37.426 [2024-05-16 20:06:24.436357] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12282: invalid serial number 'U9A qlw4Y[R=u@#8V;(rJ' 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:07:37.426 { 00:07:37.426 "nqn": "nqn.2016-06.io.spdk:cnode12282", 00:07:37.426 "serial_number": "U9A qlw4Y[R=u@#8V;(rJ", 00:07:37.426 "method": "nvmf_create_subsystem", 00:07:37.426 "req_id": 1 00:07:37.426 } 00:07:37.426 Got JSON-RPC error response 00:07:37.426 response: 00:07:37.426 { 00:07:37.426 "code": -32602, 
00:07:37.426 "message": "Invalid SN U9A qlw4Y[R=u@#8V;(rJ" 00:07:37.426 }' 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:07:37.426 { 00:07:37.426 "nqn": "nqn.2016-06.io.spdk:cnode12282", 00:07:37.426 "serial_number": "U9A qlw4Y[R=u@#8V;(rJ", 00:07:37.426 "method": "nvmf_create_subsystem", 00:07:37.426 "req_id": 1 00:07:37.426 } 00:07:37.426 Got JSON-RPC error response 00:07:37.426 response: 00:07:37.426 { 00:07:37.426 "code": -32602, 00:07:37.426 "message": "Invalid SN U9A qlw4Y[R=u@#8V;(rJ" 00:07:37.426 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:37.426 20:06:24 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.426 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:07:37.427 20:06:24 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:07:37.427 20:06:24 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.427 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.686 20:06:24 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 4 == \- ]] 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '4tGXY;/Y-qAC!-@Zb/| GtJwW@Jn@{UpC~LhkVU/' 00:07:37.686 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '4tGXY;/Y-qAC!-@Zb/| GtJwW@Jn@{UpC~LhkVU/' nqn.2016-06.io.spdk:cnode24412 00:07:37.945 [2024-05-16 20:06:24.845673] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24412: invalid model number '4tGXY;/Y-qAC!-@Zb/| GtJwW@Jn@{UpC~LhkVU/' 00:07:37.945 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:07:37.945 { 00:07:37.945 "nqn": "nqn.2016-06.io.spdk:cnode24412", 00:07:37.945 "model_number": "4tGXY;/Y-qAC!-@Zb/| GtJwW@\u007fJn@{UpC~LhkVU/", 00:07:37.945 "method": "nvmf_create_subsystem", 00:07:37.945 "req_id": 1 00:07:37.945 } 00:07:37.945 Got JSON-RPC error response 00:07:37.945 response: 00:07:37.945 { 00:07:37.945 "code": -32602, 00:07:37.945 "message": "Invalid MN 4tGXY;/Y-qAC!-@Zb/| GtJwW@\u007fJn@{UpC~LhkVU/" 00:07:37.945 }' 00:07:37.945 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:07:37.945 { 00:07:37.945 "nqn": "nqn.2016-06.io.spdk:cnode24412", 00:07:37.945 "model_number": "4tGXY;/Y-qAC!-@Zb/| GtJwW@\u007fJn@{UpC~LhkVU/", 00:07:37.945 "method": "nvmf_create_subsystem", 00:07:37.945 "req_id": 1 00:07:37.945 } 00:07:37.945 Got JSON-RPC error response 00:07:37.945 response: 00:07:37.945 { 00:07:37.945 "code": -32602, 00:07:37.945 "message": "Invalid MN 4tGXY;/Y-qAC!-@Zb/| GtJwW@\u007fJn@{UpC~LhkVU/" 00:07:37.945 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:37.945 20:06:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
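Both negative cases above (bad serial number, bad model number) follow the same capture-and-match pattern: the RPC output, which is expected to be an error, is captured into a variable and compared against the expected message with a glob. A hedged sketch of that pattern, with a placeholder string and a relative rpc.py path standing in for the full workspace values of this run:

bad_mn='<41-character string from gen_random_s>'    # placeholder, not the exact value above
out=$(scripts/rpc.py nvmf_create_subsystem -d "$bad_mn" \
      nqn.2016-06.io.spdk:cnode24412 2>&1) || true
[[ $out == *"Invalid MN"* ]] || exit 1              # pass only if creation was rejected
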
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:37.945 [2024-05-16 20:06:25.078524] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.203 20:06:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:38.461 20:06:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:38.461 20:06:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:38.461 20:06:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:38.461 20:06:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:38.461 20:06:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:38.461 [2024-05-16 20:06:25.584111] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:38.461 [2024-05-16 20:06:25.584216] nvmf_rpc.c: 794:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:38.461 20:06:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:07:38.461 { 00:07:38.461 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:38.461 "listen_address": { 00:07:38.461 "trtype": "tcp", 00:07:38.461 "traddr": "", 00:07:38.461 "trsvcid": "4421" 00:07:38.461 }, 00:07:38.461 "method": "nvmf_subsystem_remove_listener", 00:07:38.461 "req_id": 1 00:07:38.461 } 00:07:38.461 Got JSON-RPC error response 00:07:38.461 response: 00:07:38.461 { 00:07:38.461 "code": -32602, 00:07:38.461 "message": "Invalid parameters" 00:07:38.461 }' 00:07:38.461 20:06:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:07:38.461 { 00:07:38.461 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:38.461 "listen_address": { 00:07:38.461 "trtype": "tcp", 00:07:38.461 "traddr": "", 00:07:38.461 "trsvcid": "4421" 00:07:38.461 }, 00:07:38.461 "method": "nvmf_subsystem_remove_listener", 00:07:38.461 "req_id": 1 00:07:38.461 } 00:07:38.461 Got JSON-RPC error response 00:07:38.461 response: 00:07:38.461 { 00:07:38.461 "code": -32602, 00:07:38.461 "message": "Invalid parameters" 00:07:38.461 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:38.461 20:06:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3143 -i 0 00:07:38.719 [2024-05-16 20:06:25.840985] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3143: invalid cntlid range [0-65519] 00:07:38.719 20:06:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:07:38.719 { 00:07:38.719 "nqn": "nqn.2016-06.io.spdk:cnode3143", 00:07:38.719 "min_cntlid": 0, 00:07:38.719 "method": "nvmf_create_subsystem", 00:07:38.719 "req_id": 1 00:07:38.719 } 00:07:38.719 Got JSON-RPC error response 00:07:38.719 response: 00:07:38.719 { 00:07:38.719 "code": -32602, 00:07:38.719 "message": "Invalid cntlid range [0-65519]" 00:07:38.719 }' 00:07:38.719 20:06:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:07:38.719 { 00:07:38.719 "nqn": "nqn.2016-06.io.spdk:cnode3143", 00:07:38.719 "min_cntlid": 0, 00:07:38.719 "method": "nvmf_create_subsystem", 00:07:38.719 "req_id": 1 00:07:38.719 } 
00:07:38.719 Got JSON-RPC error response 00:07:38.719 response: 00:07:38.719 { 00:07:38.719 "code": -32602, 00:07:38.719 "message": "Invalid cntlid range [0-65519]" 00:07:38.719 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:38.719 20:06:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1363 -i 65520 00:07:38.977 [2024-05-16 20:06:26.085805] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1363: invalid cntlid range [65520-65519] 00:07:38.977 20:06:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:07:38.977 { 00:07:38.977 "nqn": "nqn.2016-06.io.spdk:cnode1363", 00:07:38.977 "min_cntlid": 65520, 00:07:38.977 "method": "nvmf_create_subsystem", 00:07:38.977 "req_id": 1 00:07:38.977 } 00:07:38.977 Got JSON-RPC error response 00:07:38.977 response: 00:07:38.977 { 00:07:38.977 "code": -32602, 00:07:38.977 "message": "Invalid cntlid range [65520-65519]" 00:07:38.977 }' 00:07:38.977 20:06:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:07:38.977 { 00:07:38.977 "nqn": "nqn.2016-06.io.spdk:cnode1363", 00:07:38.977 "min_cntlid": 65520, 00:07:38.977 "method": "nvmf_create_subsystem", 00:07:38.977 "req_id": 1 00:07:38.977 } 00:07:38.977 Got JSON-RPC error response 00:07:38.977 response: 00:07:38.977 { 00:07:38.977 "code": -32602, 00:07:38.977 "message": "Invalid cntlid range [65520-65519]" 00:07:38.977 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:38.977 20:06:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19576 -I 0 00:07:39.236 [2024-05-16 20:06:26.330672] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19576: invalid cntlid range [1-0] 00:07:39.236 20:06:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:07:39.236 { 00:07:39.236 "nqn": "nqn.2016-06.io.spdk:cnode19576", 00:07:39.236 "max_cntlid": 0, 00:07:39.236 "method": "nvmf_create_subsystem", 00:07:39.236 "req_id": 1 00:07:39.236 } 00:07:39.236 Got JSON-RPC error response 00:07:39.236 response: 00:07:39.236 { 00:07:39.236 "code": -32602, 00:07:39.236 "message": "Invalid cntlid range [1-0]" 00:07:39.236 }' 00:07:39.236 20:06:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:07:39.236 { 00:07:39.236 "nqn": "nqn.2016-06.io.spdk:cnode19576", 00:07:39.236 "max_cntlid": 0, 00:07:39.236 "method": "nvmf_create_subsystem", 00:07:39.236 "req_id": 1 00:07:39.236 } 00:07:39.236 Got JSON-RPC error response 00:07:39.236 response: 00:07:39.236 { 00:07:39.236 "code": -32602, 00:07:39.236 "message": "Invalid cntlid range [1-0]" 00:07:39.236 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:39.236 20:06:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17960 -I 65520 00:07:39.494 [2024-05-16 20:06:26.575479] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17960: invalid cntlid range [1-65520] 00:07:39.494 20:06:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:07:39.494 { 00:07:39.494 "nqn": "nqn.2016-06.io.spdk:cnode17960", 00:07:39.494 "max_cntlid": 65520, 00:07:39.494 "method": "nvmf_create_subsystem", 00:07:39.494 "req_id": 1 00:07:39.494 } 00:07:39.494 Got JSON-RPC error 
response 00:07:39.494 response: 00:07:39.494 { 00:07:39.494 "code": -32602, 00:07:39.494 "message": "Invalid cntlid range [1-65520]" 00:07:39.494 }' 00:07:39.494 20:06:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:07:39.494 { 00:07:39.494 "nqn": "nqn.2016-06.io.spdk:cnode17960", 00:07:39.494 "max_cntlid": 65520, 00:07:39.494 "method": "nvmf_create_subsystem", 00:07:39.494 "req_id": 1 00:07:39.494 } 00:07:39.494 Got JSON-RPC error response 00:07:39.494 response: 00:07:39.494 { 00:07:39.494 "code": -32602, 00:07:39.494 "message": "Invalid cntlid range [1-65520]" 00:07:39.494 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:39.494 20:06:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22477 -i 6 -I 5 00:07:39.752 [2024-05-16 20:06:26.840360] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22477: invalid cntlid range [6-5] 00:07:39.752 20:06:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:07:39.752 { 00:07:39.752 "nqn": "nqn.2016-06.io.spdk:cnode22477", 00:07:39.752 "min_cntlid": 6, 00:07:39.752 "max_cntlid": 5, 00:07:39.752 "method": "nvmf_create_subsystem", 00:07:39.752 "req_id": 1 00:07:39.752 } 00:07:39.752 Got JSON-RPC error response 00:07:39.752 response: 00:07:39.752 { 00:07:39.752 "code": -32602, 00:07:39.752 "message": "Invalid cntlid range [6-5]" 00:07:39.752 }' 00:07:39.752 20:06:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:07:39.752 { 00:07:39.752 "nqn": "nqn.2016-06.io.spdk:cnode22477", 00:07:39.752 "min_cntlid": 6, 00:07:39.752 "max_cntlid": 5, 00:07:39.752 "method": "nvmf_create_subsystem", 00:07:39.752 "req_id": 1 00:07:39.752 } 00:07:39.752 Got JSON-RPC error response 00:07:39.752 response: 00:07:39.752 { 00:07:39.752 "code": -32602, 00:07:39.752 "message": "Invalid cntlid range [6-5]" 00:07:39.752 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:39.752 20:06:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:40.010 20:06:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:40.010 { 00:07:40.010 "name": "foobar", 00:07:40.010 "method": "nvmf_delete_target", 00:07:40.010 "req_id": 1 00:07:40.010 } 00:07:40.010 Got JSON-RPC error response 00:07:40.010 response: 00:07:40.010 { 00:07:40.010 "code": -32602, 00:07:40.010 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:40.010 }' 00:07:40.010 20:06:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:40.010 { 00:07:40.010 "name": "foobar", 00:07:40.010 "method": "nvmf_delete_target", 00:07:40.010 "req_id": 1 00:07:40.010 } 00:07:40.010 Got JSON-RPC error response 00:07:40.010 response: 00:07:40.010 { 00:07:40.011 "code": -32602, 00:07:40.011 "message": "The specified target doesn't exist, cannot delete it." 
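The five cntlid probes above (min 0, min 65520, max 0, max 65520, and min 6 with max 5) all trip the same validation: controller IDs must lie in 1..65519 and the minimum may not exceed the maximum, which matches the "Invalid cntlid range" messages reported for cnode3143, cnode1363, cnode19576, cnode17960 and cnode22477. A small bash sketch of that rule (the exact upper bound is inferred from the error text, so treat it as an assumption):

valid_cntlid_range() {
    local min=$1 max=$2
    (( min >= 1 && max <= 65519 && min <= max ))
}
valid_cntlid_range 0 65519 || echo 'rejected: [0-65519]'
valid_cntlid_range 6 5     || echo 'rejected: [6-5]'
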
00:07:40.011 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:40.011 20:06:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:40.011 20:06:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:07:40.011 20:06:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:40.011 20:06:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:07:40.011 20:06:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:40.011 20:06:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:07:40.011 20:06:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:40.011 20:06:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:40.011 rmmod nvme_tcp 00:07:40.011 rmmod nvme_fabrics 00:07:40.011 rmmod nvme_keyring 00:07:40.011 20:06:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:40.011 20:06:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:07:40.011 20:06:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:07:40.011 20:06:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 132751 ']' 00:07:40.011 20:06:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 132751 00:07:40.011 20:06:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 132751 ']' 00:07:40.011 20:06:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 132751 00:07:40.011 20:06:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:07:40.011 20:06:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:40.011 20:06:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 132751 00:07:40.011 20:06:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:40.011 20:06:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:40.011 20:06:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 132751' 00:07:40.011 killing process with pid 132751 00:07:40.011 20:06:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 132751 00:07:40.011 [2024-05-16 20:06:27.061010] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:40.011 20:06:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 132751 00:07:40.270 20:06:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:40.270 20:06:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:40.270 20:06:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:40.270 20:06:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:40.270 20:06:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:40.270 20:06:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.270 20:06:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.270 20:06:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.806 20:06:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
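The teardown above is the standard nvmftestfini sequence: unload the nvme-tcp/fabrics/keyring modules, confirm pid 132751 is still the reactor process before killing it, wait for it, then flush the test interfaces. A hedged sketch of the kill-and-wait part (the pid and interface names are the ones from this run; the namespace deletion is shown for illustration and may be handled elsewhere in the real helpers):

pid=132751
if kill -0 "$pid" 2>/dev/null; then        # is the target still running?
    kill "$pid"
    wait "$pid" 2>/dev/null || true        # reap it if it was our child
fi
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
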
00:07:42.806 00:07:42.806 real 0m8.506s 00:07:42.806 user 0m19.942s 00:07:42.806 sys 0m2.361s 00:07:42.806 20:06:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.806 20:06:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:42.806 ************************************ 00:07:42.806 END TEST nvmf_invalid 00:07:42.806 ************************************ 00:07:42.806 20:06:29 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:42.806 20:06:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:42.806 20:06:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.806 20:06:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:42.806 ************************************ 00:07:42.806 START TEST nvmf_abort 00:07:42.806 ************************************ 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:42.806 * Looking for test storage... 00:07:42.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.806 20:06:29 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source 
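Each suite here is driven through the run_test helper, which prints the START/END banners and times the script between them; that is where the real/user/sys summary and the nvmf_invalid/nvmf_abort banners above come from. A hedged sketch of what such a wrapper does (the real helper lives in autotest_common.sh and does more bookkeeping than this):

run_test_sketch() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"                  # run the suite and print the real/user/sys summary
    echo "************ END TEST $name ************"
}
run_test_sketch nvmf_abort ./test/nvmf/target/abort.sh --transport=tcp
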
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:42.807 20:06:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:44.711 20:06:31 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:44.711 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:44.711 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:44.711 Found net devices under 0000:09:00.0: cvl_0_0 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- 
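The block above walks the supported NIC list (e810/x722/mlx vendor:device pairs) and, for each matching PCI function, resolves the kernel net device through sysfs; the same lookup continues just below for 0000:09:00.1. A minimal sketch of that sysfs lookup, using the two PCI addresses reported in this run:

for pci in 0000:09:00.0 0000:09:00.1; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] || continue                 # skip if nothing is bound
        echo "Found net devices under $pci: ${path##*/}"
    done
done
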
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:44.711 Found net devices under 0000:09:00.1: cvl_0_1 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:44.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:44.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:07:44.711 00:07:44.711 --- 10.0.0.2 ping statistics --- 00:07:44.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.711 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:44.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:44.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:07:44.711 00:07:44.711 --- 10.0.0.1 ping statistics --- 00:07:44.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.711 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:44.711 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:44.712 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:44.712 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=135390 00:07:44.712 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:44.712 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 135390 00:07:44.712 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 135390 ']' 00:07:44.712 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.712 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:44.712 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.712 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:44.712 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:44.712 [2024-05-16 20:06:31.581055] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
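The connectivity checks above work because nvmf_tcp_init split the two ports of the NIC across namespaces: cvl_0_0 was moved into cvl_0_0_ns_spdk as the target side (10.0.0.2), while cvl_0_1 stayed in the root namespace as the initiator side (10.0.0.1). Condensed from the commands traced above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2      # root namespace -> target namespace, as shown above
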
00:07:44.712 [2024-05-16 20:06:31.581149] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.712 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.712 [2024-05-16 20:06:31.646396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:44.712 [2024-05-16 20:06:31.757737] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.712 [2024-05-16 20:06:31.757801] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.712 [2024-05-16 20:06:31.757831] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.712 [2024-05-16 20:06:31.757843] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.712 [2024-05-16 20:06:31.757860] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.712 [2024-05-16 20:06:31.757933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.712 [2024-05-16 20:06:31.757990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.712 [2024-05-16 20:06:31.757993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:44.970 [2024-05-16 20:06:31.904485] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:44.970 Malloc0 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:44.970 Delay0 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:44.970 20:06:31 
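With the target app up, abort.sh builds its test target over JSON-RPC: a TCP transport, a 64 MB malloc bdev, a delay bdev stacked on top of it (so I/O stays in flight long enough to be abortable), and subsystem nqn.2016-06.io.spdk:cnode0; the namespace and listener calls follow just below. The same sequence as plain rpc.py invocations (relative path used here instead of the full workspace path):

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
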
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:44.970 [2024-05-16 20:06:31.976519] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:44.970 [2024-05-16 20:06:31.976804] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.970 20:06:31 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:44.970 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.229 [2024-05-16 20:06:32.121937] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:47.137 Initializing NVMe Controllers 00:07:47.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:47.137 controller IO queue size 128 less than required 00:07:47.137 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:47.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:47.137 Initialization complete. Launching workers. 
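Condensed, the target configuration that the rpc_cmd calls above performed before launching the abort example is the short sequence below. This is an illustrative recap using the values visible in the log (Malloc0 of 64 blocks x 4096 bytes, the Delay0 wrapper with 1,000,000 us latencies, serial SPDK0, listener 10.0.0.2:4420); rpc.py stands for scripts/rpc.py against the default /var/tmp/spdk.sock, and the real abort.sh wraps each call in rpc_cmd:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc.py bdev_malloc_create 64 4096 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 128 queued I/Os against the artificially delayed namespace give the abort path work to do:
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The delay bdev is what keeps requests parked at the target long enough for the example to cancel them, which is why the summary below reports tens of thousands of submitted aborts.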
00:07:47.137 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 35196 00:07:47.137 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 35257, failed to submit 62 00:07:47.137 success 35200, unsuccess 57, failed 0 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:47.137 rmmod nvme_tcp 00:07:47.137 rmmod nvme_fabrics 00:07:47.137 rmmod nvme_keyring 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 135390 ']' 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 135390 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 135390 ']' 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 135390 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 135390 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 135390' 00:07:47.137 killing process with pid 135390 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 135390 00:07:47.137 [2024-05-16 20:06:34.248241] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:47.137 20:06:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 135390 00:07:47.703 20:06:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:47.703 20:06:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:47.703 20:06:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:47.703 20:06:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:47.703 20:06:34 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:47.703 20:06:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.703 20:06:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.703 20:06:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.608 20:06:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:49.608 00:07:49.608 real 0m7.185s 00:07:49.608 user 0m10.727s 00:07:49.608 sys 0m2.200s 00:07:49.608 20:06:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:49.608 20:06:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.608 ************************************ 00:07:49.608 END TEST nvmf_abort 00:07:49.608 ************************************ 00:07:49.608 20:06:36 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:49.608 20:06:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:49.608 20:06:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.608 20:06:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:49.608 ************************************ 00:07:49.608 START TEST nvmf_ns_hotplug_stress 00:07:49.608 ************************************ 00:07:49.608 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:49.608 * Looking for test storage... 00:07:49.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:49.608 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:49.608 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:49.608 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.608 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.608 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.608 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.608 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.608 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.608 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.608 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.608 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.608 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.608 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:49.608 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:49.608 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.608 20:06:36 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.608 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:49.609 20:06:36 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:49.609 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:51.512 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.512 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:51.512 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:51.513 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:51.513 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:51.513 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:51.513 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:51.513 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:51.513 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:51.513 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:51.513 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:51.513 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:51.513 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:51.513 20:06:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:51.513 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:51.513 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.513 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.513 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:51.772 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:51.772 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.772 
20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:51.772 Found net devices under 0000:09:00.0: cvl_0_0 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:51.772 Found net devices under 0000:09:00.1: cvl_0_1 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:51.772 
20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:51.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:51.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:07:51.772 00:07:51.772 --- 10.0.0.2 ping statistics --- 00:07:51.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.772 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:51.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:07:51.772 00:07:51.772 --- 10.0.0.1 ping statistics --- 00:07:51.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.772 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.772 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:51.773 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:51.773 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:51.773 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:51.773 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:51.773 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:51.773 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=137610 00:07:51.773 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:51.773 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 137610 00:07:51.773 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 137610 ']' 00:07:51.773 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.773 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:51.773 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.773 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:51.773 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:51.773 [2024-05-16 20:06:38.863664] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:07:51.773 [2024-05-16 20:06:38.863742] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.773 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.031 [2024-05-16 20:06:38.933103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:52.031 [2024-05-16 20:06:39.049784] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
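The interface plumbing replayed just above is nvmf/common.sh's nvmf_tcp_init: one of the two e810 ports is moved into a private network namespace to act as the target, while the other stays in the root namespace as the initiator. Condensed from the logged commands, with the roles called out (same cvl_0_0/cvl_0_1 names and 10.0.0.x addressing as in this run):

  # cvl_0_0 becomes the target-side port, isolated in its own namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # cvl_0_1 stays in the root namespace and is the initiator side.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic arriving on the initiator port.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Both directions are then verified with the single-packet pings shown above.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1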
00:07:52.031 [2024-05-16 20:06:39.049850] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.031 [2024-05-16 20:06:39.049877] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.031 [2024-05-16 20:06:39.049890] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.031 [2024-05-16 20:06:39.049902] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.031 [2024-05-16 20:06:39.049989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.031 [2024-05-16 20:06:39.050083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.031 [2024-05-16 20:06:39.050086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.962 20:06:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:52.962 20:06:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:07:52.962 20:06:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:52.962 20:06:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.962 20:06:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:52.962 20:06:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.963 20:06:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:52.963 20:06:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:52.963 [2024-05-16 20:06:40.052303] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.963 20:06:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:53.220 20:06:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.478 [2024-05-16 20:06:40.573879] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:53.478 [2024-05-16 20:06:40.574181] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.478 20:06:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.736 20:06:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:53.994 Malloc0 00:07:53.994 20:06:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:54.251 Delay0 00:07:54.251 20:06:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.509 20:06:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:54.767 NULL1 00:07:54.767 20:06:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:55.025 20:06:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=138033 00:07:55.025 20:06:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:55.025 20:06:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:07:55.025 20:06:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.025 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.398 Read completed with error (sct=0, sc=11) 00:07:56.398 20:06:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.656 20:06:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:56.656 20:06:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:56.914 true 00:07:56.914 20:06:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:07:56.914 20:06:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.479 20:06:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.737 20:06:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:57.737 20:06:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:57.994 true 00:07:57.994 20:06:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:07:57.994 20:06:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.251 20:06:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.508 20:06:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:58.508 20:06:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:58.766 true 00:07:58.766 20:06:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:07:58.766 20:06:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.697 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.697 20:06:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.697 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.697 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.697 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.953 20:06:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:59.953 20:06:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:00.210 true 00:08:00.210 20:06:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:00.210 20:06:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.467 20:06:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.724 20:06:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:00.724 20:06:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:00.982 true 00:08:00.982 20:06:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:00.982 20:06:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.914 20:06:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.173 20:06:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:02.173 20:06:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:02.430 true 00:08:02.430 20:06:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:02.430 20:06:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.687 20:06:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.944 20:06:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:02.944 20:06:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:03.201 true 00:08:03.201 20:06:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:03.201 20:06:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.132 20:06:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.389 20:06:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:04.389 20:06:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:04.646 true 00:08:04.646 20:06:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:04.646 20:06:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.904 20:06:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.161 20:06:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:05.161 20:06:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:05.419 true 00:08:05.419 20:06:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:05.419 20:06:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.676 20:06:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.934 20:06:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 
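The alternating add_ns / null_size / bdev_null_resize / remove_ns entries running through this stretch of the log are iterations of the ns_hotplug_stress loop: while the background spdk_nvme_perf job (PERF_PID 138033 in this run) keeps reading from cnode1, the Delay0 namespace is re-attached, the NULL1 bdev is grown by one block, and namespace 1 is detached again. A hedged sketch of that loop, with rpc.py abbreviating the full scripts/rpc.py path used in the log:

  # Keep hot-plugging while the perf process is still alive (kill -0 only probes the pid).
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 "$null_size"
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  done

The exact ordering and error handling in ns_hotplug_stress.sh differ slightly (the nthreads=8 phase that begins at the end of this excerpt works on a separate set of null bdevs), so treat this as a reading aid for the log rather than the script itself.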
00:08:05.934 20:06:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:06.192 true 00:08:06.192 20:06:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:06.192 20:06:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.562 Message suppressed 999 times: 20:06:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.562 Read completed with error (sct=0, sc=11) 00:08:07.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.562 20:06:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:07.562 20:06:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:07.819 true 00:08:07.819 20:06:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:07.819 20:06:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.749 20:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.749 20:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:08.749 20:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:09.007 true 00:08:09.007 20:06:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:09.007 20:06:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.264 20:06:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.522 20:06:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:09.522 20:06:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:09.780 true 00:08:09.780 20:06:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:09.780 20:06:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.713 20:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.971 20:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:10.971 20:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:11.228 true 00:08:11.228 20:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:11.228 20:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.486 20:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.744 20:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:11.744 20:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:12.002 true 00:08:12.002 20:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:12.002 20:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.260 20:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.518 20:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:12.518 20:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:12.775 true 00:08:12.775 20:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:12.775 20:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.707 20:07:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.707 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.707 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.707 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.965 20:07:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 
00:08:13.965 20:07:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:14.222 true 00:08:14.222 20:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:14.222 20:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.156 20:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.156 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:15.156 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:15.414 true 00:08:15.414 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:15.414 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.671 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.928 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:15.928 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:16.186 true 00:08:16.186 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:16.186 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:17.118 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.376 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:17.376 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:17.634 true 00:08:17.634 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:17.634 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.892 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.149 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 
00:08:18.149 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:18.407 true 00:08:18.407 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:18.407 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.340 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.598 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:19.598 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:19.856 true 00:08:19.856 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:19.856 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.115 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.372 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:20.372 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:20.630 true 00:08:20.630 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:20.630 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.888 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.146 20:07:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:21.146 20:07:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:21.404 true 00:08:21.404 20:07:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:21.404 20:07:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.595 20:07:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.595 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.595 20:07:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:22.595 20:07:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:22.851 true 00:08:22.851 20:07:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:22.851 20:07:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.108 20:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.365 20:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:23.365 20:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:23.622 true 00:08:23.880 20:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:23.880 20:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.812 20:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.812 20:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:24.812 20:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:25.070 true 00:08:25.070 20:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:25.070 20:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.328 Initializing NVMe Controllers 00:08:25.328 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:25.328 Controller IO queue size 128, less than required. 00:08:25.328 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:25.328 Controller IO queue size 128, less than required. 00:08:25.328 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:25.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:25.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:25.328 Initialization complete. Launching workers. 
00:08:25.328 ======================================================== 00:08:25.328 Latency(us) 00:08:25.328 Device Information : IOPS MiB/s Average min max 00:08:25.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1317.91 0.64 52558.91 3383.00 1013336.44 00:08:25.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11337.14 5.54 11289.98 2788.01 452055.11 00:08:25.328 ======================================================== 00:08:25.328 Total : 12655.05 6.18 15587.77 2788.01 1013336.44 00:08:25.328 00:08:25.328 20:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.586 20:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:25.586 20:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:25.845 true 00:08:25.845 20:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 138033 00:08:25.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (138033) - No such process 00:08:25.845 20:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 138033 00:08:25.845 20:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.103 20:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:26.361 20:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:26.361 20:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:26.361 20:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:26.361 20:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:26.361 20:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:26.619 null0 00:08:26.619 20:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:26.619 20:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:26.619 20:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:26.876 null1 00:08:26.876 20:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:26.876 20:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:26.876 20:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:27.134 null2 00:08:27.134 20:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:27.135 20:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:08:27.135 20:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:27.392 null3 00:08:27.392 20:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:27.392 20:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:27.392 20:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:27.649 null4 00:08:27.649 20:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:27.649 20:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:27.649 20:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:27.907 null5 00:08:27.907 20:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:27.907 20:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:27.907 20:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:28.165 null6 00:08:28.165 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:28.165 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:28.165 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:28.165 null7 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
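From tag @58 onward the script switches to a parallel phase: it creates one null bdev per worker (null0 through null7, as in the records just above), then launches eight background add_remove workers, one per namespace ID, and finally waits for all of them; the records that follow show the remaining launches and the wait on worker PIDs 142080 through 142093. A sketch reconstructed from the @14-@18 and @58-@66 trace tags, with rpc and nqn again as placeholder names rather than the script's own variables:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {                                  # @14-@18: one worker
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do              # @16: ten add/remove cycles per worker
            $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17: attach the bdev as namespace $nsid
            $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18: detach it again
        done
    }

    nthreads=8                                      # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do            # @59-@60: null0..null7, created with the args seen in the trace (100, 4096)
        $rpc bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do            # @62-@64: worker on null$i hammers namespace ID i+1
        add_remove "$((i + 1))" "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"                               # @66: the eight worker PIDs listed in the trace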
00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 142080 142081 142083 142085 142087 142089 142091 142093 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.423 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:28.681 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:28.681 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:28.681 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.681 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:28.681 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:28.681 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:28.681 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:28.681 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.939 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.940 20:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:29.197 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.197 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:29.197 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:29.197 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:29.197 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:29.197 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:29.197 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:29.197 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.455 20:07:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.455 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:29.713 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:29.713 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.713 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:29.713 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:29.713 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:29.713 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:29.713 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:29.713 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:29.972 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.972 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.972 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:29.972 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.972 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.972 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:29.972 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.972 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.972 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:29.972 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.972 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.972 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:29.972 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.972 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.972 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:29.972 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.972 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.972 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:29.972 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.972 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.973 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:29.973 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.973 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.973 20:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:30.231 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:30.231 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.231 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:30.231 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:30.231 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:30.231 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:30.231 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:30.231 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.489 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.489 
20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:30.748 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:30.748 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:30.748 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.748 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:30.748 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:30.748 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:30.748 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:30.748 20:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.006 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:31.264 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:31.264 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:31.264 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:31.264 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.264 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:31.264 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:31.264 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:31.264 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.523 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:31.781 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:31.781 
20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:31.781 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:31.781 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.782 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:31.782 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:31.782 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:31.782 20:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.040 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:32.298 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:32.298 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.298 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:32.557 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:32.557 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:32.557 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:32.557 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:32.557 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.815 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:33.073 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:33.073 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:33.073 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.073 
20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:33.073 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.073 20:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:33.073 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:33.073 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.331 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:33.589 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:33.589 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:33.589 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:33.589 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.589 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:33.589 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:33.589 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:33.589 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
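A quick gloss on the xtrace above: lines 16-18 of ns_hotplug_stress.sh cycle namespaces in and out of nqn.2016-06.io.spdk:cnode1 for ten passes. A condensed sketch of that loop follows; the rpc.py spelling and the null0-null7 bdev names are copied from the trace, but the per-pass shuffling of namespace IDs and everything else the script does around the loop are left out, so read it as an illustration of the pattern rather than the script itself.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 10; i++)); do
        # hot-add null0..null7 as namespace IDs 1..8 (the trace shows the order shuffled each pass)
        for n in $(seq 1 8); do
            "$RPC" nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
        done
        # then hot-remove all eight again before the next pass
        for n in $(seq 1 8); do
            "$RPC" nvmf_subsystem_remove_ns "$NQN" "$n"
        done
    done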
00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:33.848 rmmod nvme_tcp 00:08:33.848 rmmod nvme_fabrics 00:08:33.848 rmmod nvme_keyring 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 137610 ']' 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 137610 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 137610 ']' 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 137610 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 137610 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 137610' 00:08:33.848 killing process with pid 137610 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 137610 00:08:33.848 [2024-05-16 20:07:20.903479] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:33.848 20:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 137610 00:08:34.107 20:07:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:34.107 20:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:34.107 20:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:34.107 20:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.107 20:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:34.107 20:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.107 20:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.107 20:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.648 20:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:36.648 00:08:36.648 real 0m46.559s 00:08:36.648 user 3m31.851s 00:08:36.648 sys 0m16.000s 00:08:36.648 20:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:36.648 20:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.648 ************************************ 00:08:36.648 END TEST nvmf_ns_hotplug_stress 00:08:36.648 ************************************ 00:08:36.648 20:07:23 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:36.648 20:07:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:36.648 20:07:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:36.648 20:07:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:36.648 ************************************ 00:08:36.648 START TEST nvmf_connect_stress 00:08:36.648 ************************************ 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:36.648 * Looking for test storage... 
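The tail end above is the shared nvmftestfini teardown (nvmf/common.sh@488-496 in the trace): unload the host-side NVMe modules, kill the target process, drop the per-test network namespace and flush the leftover address so the next script can rebuild the environment from scratch. Approximately, and with the namespace-removal step reconstructed from the helper name rather than quoted from common.sh:

    sync
    for mod in nvme-tcp nvme-fabrics; do
        modprobe -v -r "$mod" || true       # set +e in the helper tolerates modules that are still busy
    done
    kill "$nvmfpid" && wait "$nvmfpid"      # 137610 for the nvmf_tgt instance above
    ip netns delete cvl_0_0_ns_spdk         # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1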
00:08:36.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:36.648 20:07:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:38.546 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:38.546 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:38.546 Found net devices under 0000:09:00.0: cvl_0_0 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.546 20:07:25 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:38.546 Found net devices under 0000:09:00.1: cvl_0_1 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:38.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:38.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:08:38.546 00:08:38.546 --- 10.0.0.2 ping statistics --- 00:08:38.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.546 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:38.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:08:38.546 00:08:38.546 --- 10.0.0.1 ping statistics --- 00:08:38.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.546 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=144841 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 144841 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 144841 ']' 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:38.546 20:07:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.546 [2024-05-16 20:07:25.510658] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
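To recap the bring-up the trace above just performed: the two ports of the E810 card surface as cvl_0_0 and cvl_0_1, the target-side port is moved into its own network namespace so that initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) talk over the physical link, the path is sanity-checked with ping, and the target application is then launched inside that namespace (its startup banner continues below). A condensed sketch, with the flags copied from the trace and the address flushing and error handling omitted:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # the harness (waitforlisten 144841) then polls /var/tmp/spdk.sock until the target answers RPCs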
00:08:38.546 [2024-05-16 20:07:25.510737] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.546 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.546 [2024-05-16 20:07:25.579012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:38.803 [2024-05-16 20:07:25.695859] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.803 [2024-05-16 20:07:25.695917] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.803 [2024-05-16 20:07:25.695934] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.804 [2024-05-16 20:07:25.695947] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.804 [2024-05-16 20:07:25.695958] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.804 [2024-05-16 20:07:25.696036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.804 [2024-05-16 20:07:25.696134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.804 [2024-05-16 20:07:25.696138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.368 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:39.368 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:08:39.368 20:07:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:39.368 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.368 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.368 20:07:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.368 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:39.368 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.368 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.368 [2024-05-16 20:07:26.484059] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.368 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.368 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:39.368 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.368 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.368 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.368 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:39.368 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.368 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.368 [2024-05-16 20:07:26.501139] nvmf_rpc.c: 615:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:39.626 [2024-05-16 20:07:26.518992] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.626 NULL1 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=144995 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.626 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.627 
20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.627 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.627 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.885 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.885 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:39.885 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.885 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.885 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.143 20:07:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.144 20:07:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:40.144 20:07:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.144 20:07:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.144 20:07:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.402 20:07:27 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.402 20:07:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:40.402 20:07:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.402 20:07:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.402 20:07:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.970 20:07:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.970 20:07:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:40.970 20:07:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.970 20:07:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.970 20:07:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.228 20:07:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.228 20:07:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:41.228 20:07:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:41.228 20:07:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.228 20:07:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.488 20:07:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.488 20:07:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:41.488 20:07:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:41.488 20:07:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.488 20:07:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.745 20:07:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.745 20:07:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:41.745 20:07:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:41.746 20:07:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.746 20:07:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.311 20:07:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.311 20:07:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:42.311 20:07:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.311 20:07:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.311 20:07:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.568 20:07:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.568 20:07:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:42.568 20:07:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.568 20:07:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.568 20:07:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.826 20:07:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:08:42.826 20:07:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:42.826 20:07:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.826 20:07:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.826 20:07:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.084 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.084 20:07:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:43.084 20:07:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:43.084 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.084 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.342 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.342 20:07:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:43.342 20:07:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:43.342 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.342 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.908 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.908 20:07:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:43.908 20:07:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:43.908 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.908 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.166 20:07:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.166 20:07:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:44.166 20:07:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:44.166 20:07:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.166 20:07:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.424 20:07:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.424 20:07:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:44.424 20:07:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:44.424 20:07:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.424 20:07:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.682 20:07:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.682 20:07:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:44.682 20:07:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:44.682 20:07:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.682 20:07:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.940 20:07:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.940 20:07:32 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 144995 00:08:44.940 20:07:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:44.940 20:07:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.940 20:07:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.507 20:07:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.507 20:07:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:45.507 20:07:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:45.507 20:07:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.507 20:07:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.765 20:07:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.765 20:07:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:45.765 20:07:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:45.765 20:07:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.765 20:07:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.024 20:07:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.024 20:07:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:46.024 20:07:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.024 20:07:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.024 20:07:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.281 20:07:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.282 20:07:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:46.282 20:07:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.282 20:07:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.282 20:07:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.539 20:07:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.539 20:07:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:46.539 20:07:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.539 20:07:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.539 20:07:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.103 20:07:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.103 20:07:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:47.103 20:07:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.103 20:07:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.103 20:07:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.361 20:07:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.361 20:07:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:47.361 
20:07:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.361 20:07:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.361 20:07:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.619 20:07:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.619 20:07:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:47.619 20:07:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.619 20:07:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.619 20:07:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.877 20:07:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.877 20:07:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:47.877 20:07:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.877 20:07:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.877 20:07:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.135 20:07:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.135 20:07:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:48.135 20:07:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.135 20:07:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.135 20:07:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.702 20:07:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.702 20:07:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:48.702 20:07:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.702 20:07:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.702 20:07:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.969 20:07:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.969 20:07:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:48.969 20:07:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.969 20:07:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.969 20:07:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.227 20:07:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.227 20:07:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:49.227 20:07:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.227 20:07:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.227 20:07:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.485 20:07:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.485 20:07:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:49.485 20:07:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:08:49.485 20:07:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.485 20:07:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.743 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:49.743 20:07:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.743 20:07:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 144995 00:08:49.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (144995) - No such process 00:08:49.743 20:07:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 144995 00:08:49.743 20:07:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:49.743 20:07:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:49.743 20:07:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:49.743 20:07:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:49.743 20:07:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:49.743 20:07:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:49.743 20:07:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:49.743 20:07:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:49.743 20:07:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:49.743 rmmod nvme_tcp 00:08:50.002 rmmod nvme_fabrics 00:08:50.002 rmmod nvme_keyring 00:08:50.002 20:07:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:50.002 20:07:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:50.002 20:07:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:50.002 20:07:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 144841 ']' 00:08:50.002 20:07:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 144841 00:08:50.002 20:07:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 144841 ']' 00:08:50.002 20:07:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 144841 00:08:50.002 20:07:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:08:50.002 20:07:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:50.002 20:07:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 144841 00:08:50.002 20:07:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:50.002 20:07:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:50.002 20:07:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 144841' 00:08:50.002 killing process with pid 144841 00:08:50.002 20:07:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 144841 00:08:50.002 [2024-05-16 20:07:36.954044] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:50.002 20:07:36 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 144841 00:08:50.261 20:07:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:50.261 20:07:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:50.261 20:07:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:50.261 20:07:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.261 20:07:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:50.261 20:07:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.261 20:07:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.261 20:07:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.164 20:07:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:52.164 00:08:52.164 real 0m16.006s 00:08:52.164 user 0m41.809s 00:08:52.164 sys 0m4.802s 00:08:52.164 20:07:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:52.164 20:07:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.164 ************************************ 00:08:52.164 END TEST nvmf_connect_stress 00:08:52.164 ************************************ 00:08:52.164 20:07:39 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:52.164 20:07:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:52.164 20:07:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:52.164 20:07:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:52.423 ************************************ 00:08:52.423 START TEST nvmf_fused_ordering 00:08:52.423 ************************************ 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:52.423 * Looking for test storage... 
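The connect_stress teardown traced above is a simple liveness poll: the script keeps probing the stress worker with kill -0 (which only tests for process existence, it delivers no signal) while issuing RPCs, and treats the eventual "No such process" as the cue to remove rpc.txt, reset its traps and call nvmftestfini. A minimal sketch of that pattern, with the PID taken from the log; the pacing sleep and the exact way rpc_cmd is fed from rpc.txt are assumptions, not the literal connect_stress.sh:

    stress_pid=144995                              # worker PID reported by the test above
    while kill -0 "$stress_pid" 2>/dev/null; do    # existence check only; no signal is sent
        rpc_cmd < "$testdir/rpc.txt"               # assumed: replay the queued RPCs against the target
        sleep 0.1                                  # hypothetical pacing, not in the original script
    done
    rm -f "$testdir/rpc.txt"                       # drop the RPC list once the worker is gone
    trap - SIGINT SIGTERM EXIT                     # matches the trap reset logged before nvmftestfini
    nvmftestfini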
00:08:52.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:08:52.423 20:07:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:54.324 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:54.324 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:54.324 Found net devices under 0000:09:00.0: cvl_0_0 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.324 20:07:41 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:54.324 Found net devices under 0000:09:00.1: cvl_0_1 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.324 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:54.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:54.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:08:54.325 00:08:54.325 --- 10.0.0.2 ping statistics --- 00:08:54.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.325 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:08:54.325 00:08:54.325 --- 10.0.0.1 ping statistics --- 00:08:54.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.325 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=148148 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 148148 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 148148 ']' 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:54.325 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.325 [2024-05-16 20:07:41.456258] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
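The nvmf_tcp_init sequence above builds the test topology out of the two detected e810 ports: cvl_0_0 becomes the target interface inside a fresh cvl_0_0_ns_spdk namespace at 10.0.0.2, cvl_0_1 stays in the host namespace as the initiator at 10.0.0.1, TCP port 4420 is opened on the host side, and reachability is verified in both directions by the pings whose output appears above. Condensed to the commands actually logged (the helper in nvmf/common.sh wraps these with additional checks):

    ip netns add cvl_0_0_ns_spdk                                   # private namespace for the NVMe-oF target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port leaves the host namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator keeps the host-side port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP port 4420 on the host-side interface
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator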
00:08:54.325 [2024-05-16 20:07:41.456333] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.584 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.584 [2024-05-16 20:07:41.519978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.584 [2024-05-16 20:07:41.630177] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.584 [2024-05-16 20:07:41.630256] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.584 [2024-05-16 20:07:41.630270] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.584 [2024-05-16 20:07:41.630296] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.584 [2024-05-16 20:07:41.630306] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.584 [2024-05-16 20:07:41.630341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.843 [2024-05-16 20:07:41.780428] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.843 [2024-05-16 20:07:41.796391] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:54.843 [2024-05-16 20:07:41.796646] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.843 NULL1 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.843 20:07:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:54.843 [2024-05-16 20:07:41.841597] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
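Before the fused_ordering binary runs, the target inside the namespace is provisioned through the rpc_cmd calls traced above: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 allowing any host with at most 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MB null bdev exported as namespace 1 (the 1GB namespace the initiator reports). The same sequence expressed with SPDK's rpc.py against the default /var/tmp/spdk.sock is sketched here; the test issues these through its rpc_cmd wrapper rather than calling rpc.py directly:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                        # transport options exactly as logged
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                                # 1000 MB null bdev, 512-byte blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1         # reported as Namespace ID: 1, size 1GB

The fused_ordering tool then connects with trtype:tcp traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 and emits the fused_ordering(n) progress lines seen below as it submits its command sequence.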
00:08:54.843 [2024-05-16 20:07:41.841639] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148168 ] 00:08:54.843 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.409 Attached to nqn.2016-06.io.spdk:cnode1 00:08:55.409 Namespace ID: 1 size: 1GB 00:08:55.409 fused_ordering(0) 00:08:55.409 fused_ordering(1) 00:08:55.409 fused_ordering(2) 00:08:55.409 fused_ordering(3) 00:08:55.409 fused_ordering(4) 00:08:55.409 fused_ordering(5) 00:08:55.409 fused_ordering(6) 00:08:55.409 fused_ordering(7) 00:08:55.409 fused_ordering(8) 00:08:55.409 fused_ordering(9) 00:08:55.410 fused_ordering(10) 00:08:55.410 fused_ordering(11) 00:08:55.410 fused_ordering(12) 00:08:55.410 fused_ordering(13) 00:08:55.410 fused_ordering(14) 00:08:55.410 fused_ordering(15) 00:08:55.410 fused_ordering(16) 00:08:55.410 fused_ordering(17) 00:08:55.410 fused_ordering(18) 00:08:55.410 fused_ordering(19) 00:08:55.410 fused_ordering(20) 00:08:55.410 fused_ordering(21) 00:08:55.410 fused_ordering(22) 00:08:55.410 fused_ordering(23) 00:08:55.410 fused_ordering(24) 00:08:55.410 fused_ordering(25) 00:08:55.410 fused_ordering(26) 00:08:55.410 fused_ordering(27) 00:08:55.410 fused_ordering(28) 00:08:55.410 fused_ordering(29) 00:08:55.410 fused_ordering(30) 00:08:55.410 fused_ordering(31) 00:08:55.410 fused_ordering(32) 00:08:55.410 fused_ordering(33) 00:08:55.410 fused_ordering(34) 00:08:55.410 fused_ordering(35) 00:08:55.410 fused_ordering(36) 00:08:55.410 fused_ordering(37) 00:08:55.410 fused_ordering(38) 00:08:55.410 fused_ordering(39) 00:08:55.410 fused_ordering(40) 00:08:55.410 fused_ordering(41) 00:08:55.410 fused_ordering(42) 00:08:55.410 fused_ordering(43) 00:08:55.410 fused_ordering(44) 00:08:55.410 fused_ordering(45) 00:08:55.410 fused_ordering(46) 00:08:55.410 fused_ordering(47) 00:08:55.410 fused_ordering(48) 00:08:55.410 fused_ordering(49) 00:08:55.410 fused_ordering(50) 00:08:55.410 fused_ordering(51) 00:08:55.410 fused_ordering(52) 00:08:55.410 fused_ordering(53) 00:08:55.410 fused_ordering(54) 00:08:55.410 fused_ordering(55) 00:08:55.410 fused_ordering(56) 00:08:55.410 fused_ordering(57) 00:08:55.410 fused_ordering(58) 00:08:55.410 fused_ordering(59) 00:08:55.410 fused_ordering(60) 00:08:55.410 fused_ordering(61) 00:08:55.410 fused_ordering(62) 00:08:55.410 fused_ordering(63) 00:08:55.410 fused_ordering(64) 00:08:55.410 fused_ordering(65) 00:08:55.410 fused_ordering(66) 00:08:55.410 fused_ordering(67) 00:08:55.410 fused_ordering(68) 00:08:55.410 fused_ordering(69) 00:08:55.410 fused_ordering(70) 00:08:55.410 fused_ordering(71) 00:08:55.410 fused_ordering(72) 00:08:55.410 fused_ordering(73) 00:08:55.410 fused_ordering(74) 00:08:55.410 fused_ordering(75) 00:08:55.410 fused_ordering(76) 00:08:55.410 fused_ordering(77) 00:08:55.410 fused_ordering(78) 00:08:55.410 fused_ordering(79) 00:08:55.410 fused_ordering(80) 00:08:55.410 fused_ordering(81) 00:08:55.410 fused_ordering(82) 00:08:55.410 fused_ordering(83) 00:08:55.410 fused_ordering(84) 00:08:55.410 fused_ordering(85) 00:08:55.410 fused_ordering(86) 00:08:55.410 fused_ordering(87) 00:08:55.410 fused_ordering(88) 00:08:55.410 fused_ordering(89) 00:08:55.410 fused_ordering(90) 00:08:55.410 fused_ordering(91) 00:08:55.410 fused_ordering(92) 00:08:55.410 fused_ordering(93) 00:08:55.410 fused_ordering(94) 00:08:55.410 fused_ordering(95) 00:08:55.410 fused_ordering(96) 00:08:55.410 
fused_ordering(97) 00:08:55.410 fused_ordering(98) 00:08:55.410 fused_ordering(99) 00:08:55.410 fused_ordering(100) 00:08:55.410 fused_ordering(101) 00:08:55.410 fused_ordering(102) 00:08:55.410 fused_ordering(103) 00:08:55.410 fused_ordering(104) 00:08:55.410 fused_ordering(105) 00:08:55.410 fused_ordering(106) 00:08:55.410 fused_ordering(107) 00:08:55.410 fused_ordering(108) 00:08:55.410 fused_ordering(109) 00:08:55.410 fused_ordering(110) 00:08:55.410 fused_ordering(111) 00:08:55.410 fused_ordering(112) 00:08:55.410 fused_ordering(113) 00:08:55.410 fused_ordering(114) 00:08:55.410 fused_ordering(115) 00:08:55.410 fused_ordering(116) 00:08:55.410 fused_ordering(117) 00:08:55.410 fused_ordering(118) 00:08:55.410 fused_ordering(119) 00:08:55.410 fused_ordering(120) 00:08:55.410 fused_ordering(121) 00:08:55.410 fused_ordering(122) 00:08:55.410 fused_ordering(123) 00:08:55.410 fused_ordering(124) 00:08:55.410 fused_ordering(125) 00:08:55.410 fused_ordering(126) 00:08:55.410 fused_ordering(127) 00:08:55.410 fused_ordering(128) 00:08:55.410 fused_ordering(129) 00:08:55.410 fused_ordering(130) 00:08:55.410 fused_ordering(131) 00:08:55.410 fused_ordering(132) 00:08:55.410 fused_ordering(133) 00:08:55.410 fused_ordering(134) 00:08:55.410 fused_ordering(135) 00:08:55.410 fused_ordering(136) 00:08:55.410 fused_ordering(137) 00:08:55.410 fused_ordering(138) 00:08:55.410 fused_ordering(139) 00:08:55.410 fused_ordering(140) 00:08:55.410 fused_ordering(141) 00:08:55.410 fused_ordering(142) 00:08:55.410 fused_ordering(143) 00:08:55.410 fused_ordering(144) 00:08:55.410 fused_ordering(145) 00:08:55.410 fused_ordering(146) 00:08:55.410 fused_ordering(147) 00:08:55.410 fused_ordering(148) 00:08:55.410 fused_ordering(149) 00:08:55.410 fused_ordering(150) 00:08:55.410 fused_ordering(151) 00:08:55.410 fused_ordering(152) 00:08:55.410 fused_ordering(153) 00:08:55.410 fused_ordering(154) 00:08:55.410 fused_ordering(155) 00:08:55.410 fused_ordering(156) 00:08:55.410 fused_ordering(157) 00:08:55.410 fused_ordering(158) 00:08:55.410 fused_ordering(159) 00:08:55.410 fused_ordering(160) 00:08:55.410 fused_ordering(161) 00:08:55.410 fused_ordering(162) 00:08:55.410 fused_ordering(163) 00:08:55.410 fused_ordering(164) 00:08:55.410 fused_ordering(165) 00:08:55.410 fused_ordering(166) 00:08:55.410 fused_ordering(167) 00:08:55.410 fused_ordering(168) 00:08:55.410 fused_ordering(169) 00:08:55.410 fused_ordering(170) 00:08:55.410 fused_ordering(171) 00:08:55.410 fused_ordering(172) 00:08:55.410 fused_ordering(173) 00:08:55.410 fused_ordering(174) 00:08:55.410 fused_ordering(175) 00:08:55.410 fused_ordering(176) 00:08:55.410 fused_ordering(177) 00:08:55.410 fused_ordering(178) 00:08:55.410 fused_ordering(179) 00:08:55.410 fused_ordering(180) 00:08:55.410 fused_ordering(181) 00:08:55.410 fused_ordering(182) 00:08:55.410 fused_ordering(183) 00:08:55.410 fused_ordering(184) 00:08:55.410 fused_ordering(185) 00:08:55.410 fused_ordering(186) 00:08:55.410 fused_ordering(187) 00:08:55.410 fused_ordering(188) 00:08:55.410 fused_ordering(189) 00:08:55.410 fused_ordering(190) 00:08:55.410 fused_ordering(191) 00:08:55.410 fused_ordering(192) 00:08:55.410 fused_ordering(193) 00:08:55.410 fused_ordering(194) 00:08:55.410 fused_ordering(195) 00:08:55.410 fused_ordering(196) 00:08:55.410 fused_ordering(197) 00:08:55.410 fused_ordering(198) 00:08:55.410 fused_ordering(199) 00:08:55.410 fused_ordering(200) 00:08:55.410 fused_ordering(201) 00:08:55.410 fused_ordering(202) 00:08:55.410 fused_ordering(203) 00:08:55.410 fused_ordering(204) 
00:08:55.410 fused_ordering(205) 00:08:55.669 fused_ordering(206) 00:08:55.669 fused_ordering(207) 00:08:55.669 fused_ordering(208) 00:08:55.669 fused_ordering(209) 00:08:55.669 fused_ordering(210) 00:08:55.669 fused_ordering(211) 00:08:55.669 fused_ordering(212) 00:08:55.669 fused_ordering(213) 00:08:55.669 fused_ordering(214) 00:08:55.669 fused_ordering(215) 00:08:55.669 fused_ordering(216) 00:08:55.669 fused_ordering(217) 00:08:55.669 fused_ordering(218) 00:08:55.669 fused_ordering(219) 00:08:55.669 fused_ordering(220) 00:08:55.669 fused_ordering(221) 00:08:55.669 fused_ordering(222) 00:08:55.669 fused_ordering(223) 00:08:55.669 fused_ordering(224) 00:08:55.669 fused_ordering(225) 00:08:55.669 fused_ordering(226) 00:08:55.669 fused_ordering(227) 00:08:55.669 fused_ordering(228) 00:08:55.669 fused_ordering(229) 00:08:55.669 fused_ordering(230) 00:08:55.669 fused_ordering(231) 00:08:55.669 fused_ordering(232) 00:08:55.669 fused_ordering(233) 00:08:55.669 fused_ordering(234) 00:08:55.669 fused_ordering(235) 00:08:55.669 fused_ordering(236) 00:08:55.669 fused_ordering(237) 00:08:55.669 fused_ordering(238) 00:08:55.669 fused_ordering(239) 00:08:55.669 fused_ordering(240) 00:08:55.669 fused_ordering(241) 00:08:55.669 fused_ordering(242) 00:08:55.669 fused_ordering(243) 00:08:55.669 fused_ordering(244) 00:08:55.669 fused_ordering(245) 00:08:55.669 fused_ordering(246) 00:08:55.669 fused_ordering(247) 00:08:55.669 fused_ordering(248) 00:08:55.669 fused_ordering(249) 00:08:55.669 fused_ordering(250) 00:08:55.669 fused_ordering(251) 00:08:55.669 fused_ordering(252) 00:08:55.669 fused_ordering(253) 00:08:55.669 fused_ordering(254) 00:08:55.669 fused_ordering(255) 00:08:55.669 fused_ordering(256) 00:08:55.669 fused_ordering(257) 00:08:55.669 fused_ordering(258) 00:08:55.669 fused_ordering(259) 00:08:55.669 fused_ordering(260) 00:08:55.669 fused_ordering(261) 00:08:55.669 fused_ordering(262) 00:08:55.669 fused_ordering(263) 00:08:55.669 fused_ordering(264) 00:08:55.669 fused_ordering(265) 00:08:55.669 fused_ordering(266) 00:08:55.669 fused_ordering(267) 00:08:55.669 fused_ordering(268) 00:08:55.669 fused_ordering(269) 00:08:55.669 fused_ordering(270) 00:08:55.669 fused_ordering(271) 00:08:55.669 fused_ordering(272) 00:08:55.669 fused_ordering(273) 00:08:55.669 fused_ordering(274) 00:08:55.669 fused_ordering(275) 00:08:55.669 fused_ordering(276) 00:08:55.669 fused_ordering(277) 00:08:55.669 fused_ordering(278) 00:08:55.669 fused_ordering(279) 00:08:55.669 fused_ordering(280) 00:08:55.669 fused_ordering(281) 00:08:55.669 fused_ordering(282) 00:08:55.669 fused_ordering(283) 00:08:55.669 fused_ordering(284) 00:08:55.669 fused_ordering(285) 00:08:55.669 fused_ordering(286) 00:08:55.669 fused_ordering(287) 00:08:55.669 fused_ordering(288) 00:08:55.669 fused_ordering(289) 00:08:55.669 fused_ordering(290) 00:08:55.669 fused_ordering(291) 00:08:55.669 fused_ordering(292) 00:08:55.669 fused_ordering(293) 00:08:55.669 fused_ordering(294) 00:08:55.669 fused_ordering(295) 00:08:55.669 fused_ordering(296) 00:08:55.669 fused_ordering(297) 00:08:55.669 fused_ordering(298) 00:08:55.669 fused_ordering(299) 00:08:55.669 fused_ordering(300) 00:08:55.669 fused_ordering(301) 00:08:55.669 fused_ordering(302) 00:08:55.669 fused_ordering(303) 00:08:55.669 fused_ordering(304) 00:08:55.669 fused_ordering(305) 00:08:55.669 fused_ordering(306) 00:08:55.669 fused_ordering(307) 00:08:55.669 fused_ordering(308) 00:08:55.669 fused_ordering(309) 00:08:55.669 fused_ordering(310) 00:08:55.669 fused_ordering(311) 00:08:55.669 
fused_ordering(312) 00:08:55.669 fused_ordering(313) 00:08:55.669 fused_ordering(314) 00:08:55.669 fused_ordering(315) 00:08:55.669 fused_ordering(316) 00:08:55.669 fused_ordering(317) 00:08:55.669 fused_ordering(318) 00:08:55.669 fused_ordering(319) 00:08:55.669 fused_ordering(320) 00:08:55.669 fused_ordering(321) 00:08:55.669 fused_ordering(322) 00:08:55.669 fused_ordering(323) 00:08:55.669 fused_ordering(324) 00:08:55.669 fused_ordering(325) 00:08:55.669 fused_ordering(326) 00:08:55.669 fused_ordering(327) 00:08:55.669 fused_ordering(328) 00:08:55.669 fused_ordering(329) 00:08:55.669 fused_ordering(330) 00:08:55.669 fused_ordering(331) 00:08:55.669 fused_ordering(332) 00:08:55.669 fused_ordering(333) 00:08:55.669 fused_ordering(334) 00:08:55.669 fused_ordering(335) 00:08:55.669 fused_ordering(336) 00:08:55.669 fused_ordering(337) 00:08:55.669 fused_ordering(338) 00:08:55.669 fused_ordering(339) 00:08:55.669 fused_ordering(340) 00:08:55.669 fused_ordering(341) 00:08:55.669 fused_ordering(342) 00:08:55.669 fused_ordering(343) 00:08:55.669 fused_ordering(344) 00:08:55.669 fused_ordering(345) 00:08:55.669 fused_ordering(346) 00:08:55.669 fused_ordering(347) 00:08:55.669 fused_ordering(348) 00:08:55.669 fused_ordering(349) 00:08:55.669 fused_ordering(350) 00:08:55.669 fused_ordering(351) 00:08:55.669 fused_ordering(352) 00:08:55.669 fused_ordering(353) 00:08:55.669 fused_ordering(354) 00:08:55.669 fused_ordering(355) 00:08:55.669 fused_ordering(356) 00:08:55.669 fused_ordering(357) 00:08:55.669 fused_ordering(358) 00:08:55.669 fused_ordering(359) 00:08:55.669 fused_ordering(360) 00:08:55.669 fused_ordering(361) 00:08:55.669 fused_ordering(362) 00:08:55.669 fused_ordering(363) 00:08:55.669 fused_ordering(364) 00:08:55.669 fused_ordering(365) 00:08:55.669 fused_ordering(366) 00:08:55.669 fused_ordering(367) 00:08:55.669 fused_ordering(368) 00:08:55.669 fused_ordering(369) 00:08:55.669 fused_ordering(370) 00:08:55.669 fused_ordering(371) 00:08:55.669 fused_ordering(372) 00:08:55.669 fused_ordering(373) 00:08:55.669 fused_ordering(374) 00:08:55.669 fused_ordering(375) 00:08:55.669 fused_ordering(376) 00:08:55.669 fused_ordering(377) 00:08:55.669 fused_ordering(378) 00:08:55.669 fused_ordering(379) 00:08:55.669 fused_ordering(380) 00:08:55.669 fused_ordering(381) 00:08:55.669 fused_ordering(382) 00:08:55.669 fused_ordering(383) 00:08:55.669 fused_ordering(384) 00:08:55.669 fused_ordering(385) 00:08:55.669 fused_ordering(386) 00:08:55.669 fused_ordering(387) 00:08:55.669 fused_ordering(388) 00:08:55.669 fused_ordering(389) 00:08:55.669 fused_ordering(390) 00:08:55.669 fused_ordering(391) 00:08:55.669 fused_ordering(392) 00:08:55.669 fused_ordering(393) 00:08:55.669 fused_ordering(394) 00:08:55.669 fused_ordering(395) 00:08:55.669 fused_ordering(396) 00:08:55.669 fused_ordering(397) 00:08:55.669 fused_ordering(398) 00:08:55.669 fused_ordering(399) 00:08:55.669 fused_ordering(400) 00:08:55.669 fused_ordering(401) 00:08:55.669 fused_ordering(402) 00:08:55.669 fused_ordering(403) 00:08:55.669 fused_ordering(404) 00:08:55.669 fused_ordering(405) 00:08:55.669 fused_ordering(406) 00:08:55.669 fused_ordering(407) 00:08:55.669 fused_ordering(408) 00:08:55.669 fused_ordering(409) 00:08:55.669 fused_ordering(410) 00:08:55.927 fused_ordering(411) 00:08:55.927 fused_ordering(412) 00:08:55.927 fused_ordering(413) 00:08:55.927 fused_ordering(414) 00:08:55.927 fused_ordering(415) 00:08:55.927 fused_ordering(416) 00:08:55.927 fused_ordering(417) 00:08:55.927 fused_ordering(418) 00:08:55.927 fused_ordering(419) 
00:08:55.927 fused_ordering(420) 00:08:55.927 fused_ordering(421) 00:08:55.927 fused_ordering(422) 00:08:55.927 fused_ordering(423) 00:08:55.927 fused_ordering(424) 00:08:55.927 fused_ordering(425) 00:08:55.927 fused_ordering(426) 00:08:55.927 fused_ordering(427) 00:08:55.927 fused_ordering(428) 00:08:55.927 fused_ordering(429) 00:08:55.927 fused_ordering(430) 00:08:55.927 fused_ordering(431) 00:08:55.927 fused_ordering(432) 00:08:55.927 fused_ordering(433) 00:08:55.927 fused_ordering(434) 00:08:55.927 fused_ordering(435) 00:08:55.927 fused_ordering(436) 00:08:55.927 fused_ordering(437) 00:08:55.927 fused_ordering(438) 00:08:55.928 fused_ordering(439) 00:08:55.928 fused_ordering(440) 00:08:55.928 fused_ordering(441) 00:08:55.928 fused_ordering(442) 00:08:55.928 fused_ordering(443) 00:08:55.928 fused_ordering(444) 00:08:55.928 fused_ordering(445) 00:08:55.928 fused_ordering(446) 00:08:55.928 fused_ordering(447) 00:08:55.928 fused_ordering(448) 00:08:55.928 fused_ordering(449) 00:08:55.928 fused_ordering(450) 00:08:55.928 fused_ordering(451) 00:08:55.928 fused_ordering(452) 00:08:55.928 fused_ordering(453) 00:08:55.928 fused_ordering(454) 00:08:55.928 fused_ordering(455) 00:08:55.928 fused_ordering(456) 00:08:55.928 fused_ordering(457) 00:08:55.928 fused_ordering(458) 00:08:55.928 fused_ordering(459) 00:08:55.928 fused_ordering(460) 00:08:55.928 fused_ordering(461) 00:08:55.928 fused_ordering(462) 00:08:55.928 fused_ordering(463) 00:08:55.928 fused_ordering(464) 00:08:55.928 fused_ordering(465) 00:08:55.928 fused_ordering(466) 00:08:55.928 fused_ordering(467) 00:08:55.928 fused_ordering(468) 00:08:55.928 fused_ordering(469) 00:08:55.928 fused_ordering(470) 00:08:55.928 fused_ordering(471) 00:08:55.928 fused_ordering(472) 00:08:55.928 fused_ordering(473) 00:08:55.928 fused_ordering(474) 00:08:55.928 fused_ordering(475) 00:08:55.928 fused_ordering(476) 00:08:55.928 fused_ordering(477) 00:08:55.928 fused_ordering(478) 00:08:55.928 fused_ordering(479) 00:08:55.928 fused_ordering(480) 00:08:55.928 fused_ordering(481) 00:08:55.928 fused_ordering(482) 00:08:55.928 fused_ordering(483) 00:08:55.928 fused_ordering(484) 00:08:55.928 fused_ordering(485) 00:08:55.928 fused_ordering(486) 00:08:55.928 fused_ordering(487) 00:08:55.928 fused_ordering(488) 00:08:55.928 fused_ordering(489) 00:08:55.928 fused_ordering(490) 00:08:55.928 fused_ordering(491) 00:08:55.928 fused_ordering(492) 00:08:55.928 fused_ordering(493) 00:08:55.928 fused_ordering(494) 00:08:55.928 fused_ordering(495) 00:08:55.928 fused_ordering(496) 00:08:55.928 fused_ordering(497) 00:08:55.928 fused_ordering(498) 00:08:55.928 fused_ordering(499) 00:08:55.928 fused_ordering(500) 00:08:55.928 fused_ordering(501) 00:08:55.928 fused_ordering(502) 00:08:55.928 fused_ordering(503) 00:08:55.928 fused_ordering(504) 00:08:55.928 fused_ordering(505) 00:08:55.928 fused_ordering(506) 00:08:55.928 fused_ordering(507) 00:08:55.928 fused_ordering(508) 00:08:55.928 fused_ordering(509) 00:08:55.928 fused_ordering(510) 00:08:55.928 fused_ordering(511) 00:08:55.928 fused_ordering(512) 00:08:55.928 fused_ordering(513) 00:08:55.928 fused_ordering(514) 00:08:55.928 fused_ordering(515) 00:08:55.928 fused_ordering(516) 00:08:55.928 fused_ordering(517) 00:08:55.928 fused_ordering(518) 00:08:55.928 fused_ordering(519) 00:08:55.928 fused_ordering(520) 00:08:55.928 fused_ordering(521) 00:08:55.928 fused_ordering(522) 00:08:55.928 fused_ordering(523) 00:08:55.928 fused_ordering(524) 00:08:55.928 fused_ordering(525) 00:08:55.928 fused_ordering(526) 00:08:55.928 
fused_ordering(527) 00:08:55.928 fused_ordering(528) 00:08:55.928 fused_ordering(529) 00:08:55.928 fused_ordering(530) 00:08:55.928 fused_ordering(531) 00:08:55.928 fused_ordering(532) 00:08:55.928 fused_ordering(533) 00:08:55.928 fused_ordering(534) 00:08:55.928 fused_ordering(535) 00:08:55.928 fused_ordering(536) 00:08:55.928 fused_ordering(537) 00:08:55.928 fused_ordering(538) 00:08:55.928 fused_ordering(539) 00:08:55.928 fused_ordering(540) 00:08:55.928 fused_ordering(541) 00:08:55.928 fused_ordering(542) 00:08:55.928 fused_ordering(543) 00:08:55.928 fused_ordering(544) 00:08:55.928 fused_ordering(545) 00:08:55.928 fused_ordering(546) 00:08:55.928 fused_ordering(547) 00:08:55.928 fused_ordering(548) 00:08:55.928 fused_ordering(549) 00:08:55.928 fused_ordering(550) 00:08:55.928 fused_ordering(551) 00:08:55.928 fused_ordering(552) 00:08:55.928 fused_ordering(553) 00:08:55.928 fused_ordering(554) 00:08:55.928 fused_ordering(555) 00:08:55.928 fused_ordering(556) 00:08:55.928 fused_ordering(557) 00:08:55.928 fused_ordering(558) 00:08:55.928 fused_ordering(559) 00:08:55.928 fused_ordering(560) 00:08:55.928 fused_ordering(561) 00:08:55.928 fused_ordering(562) 00:08:55.928 fused_ordering(563) 00:08:55.928 fused_ordering(564) 00:08:55.928 fused_ordering(565) 00:08:55.928 fused_ordering(566) 00:08:55.928 fused_ordering(567) 00:08:55.928 fused_ordering(568) 00:08:55.928 fused_ordering(569) 00:08:55.928 fused_ordering(570) 00:08:55.928 fused_ordering(571) 00:08:55.928 fused_ordering(572) 00:08:55.928 fused_ordering(573) 00:08:55.928 fused_ordering(574) 00:08:55.928 fused_ordering(575) 00:08:55.928 fused_ordering(576) 00:08:55.928 fused_ordering(577) 00:08:55.928 fused_ordering(578) 00:08:55.928 fused_ordering(579) 00:08:55.928 fused_ordering(580) 00:08:55.928 fused_ordering(581) 00:08:55.928 fused_ordering(582) 00:08:55.928 fused_ordering(583) 00:08:55.928 fused_ordering(584) 00:08:55.928 fused_ordering(585) 00:08:55.928 fused_ordering(586) 00:08:55.928 fused_ordering(587) 00:08:55.928 fused_ordering(588) 00:08:55.928 fused_ordering(589) 00:08:55.928 fused_ordering(590) 00:08:55.928 fused_ordering(591) 00:08:55.928 fused_ordering(592) 00:08:55.928 fused_ordering(593) 00:08:55.928 fused_ordering(594) 00:08:55.928 fused_ordering(595) 00:08:55.928 fused_ordering(596) 00:08:55.928 fused_ordering(597) 00:08:55.928 fused_ordering(598) 00:08:55.928 fused_ordering(599) 00:08:55.928 fused_ordering(600) 00:08:55.928 fused_ordering(601) 00:08:55.928 fused_ordering(602) 00:08:55.928 fused_ordering(603) 00:08:55.928 fused_ordering(604) 00:08:55.928 fused_ordering(605) 00:08:55.928 fused_ordering(606) 00:08:55.928 fused_ordering(607) 00:08:55.928 fused_ordering(608) 00:08:55.928 fused_ordering(609) 00:08:55.928 fused_ordering(610) 00:08:55.928 fused_ordering(611) 00:08:55.928 fused_ordering(612) 00:08:55.928 fused_ordering(613) 00:08:55.928 fused_ordering(614) 00:08:55.928 fused_ordering(615) 00:08:56.495 fused_ordering(616) 00:08:56.495 fused_ordering(617) 00:08:56.495 fused_ordering(618) 00:08:56.495 fused_ordering(619) 00:08:56.495 fused_ordering(620) 00:08:56.495 fused_ordering(621) 00:08:56.495 fused_ordering(622) 00:08:56.495 fused_ordering(623) 00:08:56.495 fused_ordering(624) 00:08:56.495 fused_ordering(625) 00:08:56.495 fused_ordering(626) 00:08:56.495 fused_ordering(627) 00:08:56.495 fused_ordering(628) 00:08:56.495 fused_ordering(629) 00:08:56.495 fused_ordering(630) 00:08:56.495 fused_ordering(631) 00:08:56.495 fused_ordering(632) 00:08:56.495 fused_ordering(633) 00:08:56.495 fused_ordering(634) 
00:08:56.496 fused_ordering(635) 00:08:56.496 fused_ordering(636) 00:08:56.496 fused_ordering(637) 00:08:56.496 fused_ordering(638) 00:08:56.496 fused_ordering(639) 00:08:56.496 fused_ordering(640) 00:08:56.496 fused_ordering(641) 00:08:56.496 fused_ordering(642) 00:08:56.496 fused_ordering(643) 00:08:56.496 fused_ordering(644) 00:08:56.496 fused_ordering(645) 00:08:56.496 fused_ordering(646) 00:08:56.496 fused_ordering(647) 00:08:56.496 fused_ordering(648) 00:08:56.496 fused_ordering(649) 00:08:56.496 fused_ordering(650) 00:08:56.496 fused_ordering(651) 00:08:56.496 fused_ordering(652) 00:08:56.496 fused_ordering(653) 00:08:56.496 fused_ordering(654) 00:08:56.496 fused_ordering(655) 00:08:56.496 fused_ordering(656) 00:08:56.496 fused_ordering(657) 00:08:56.496 fused_ordering(658) 00:08:56.496 fused_ordering(659) 00:08:56.496 fused_ordering(660) 00:08:56.496 fused_ordering(661) 00:08:56.496 fused_ordering(662) 00:08:56.496 fused_ordering(663) 00:08:56.496 fused_ordering(664) 00:08:56.496 fused_ordering(665) 00:08:56.496 fused_ordering(666) 00:08:56.496 fused_ordering(667) 00:08:56.496 fused_ordering(668) 00:08:56.496 fused_ordering(669) 00:08:56.496 fused_ordering(670) 00:08:56.496 fused_ordering(671) 00:08:56.496 fused_ordering(672) 00:08:56.496 fused_ordering(673) 00:08:56.496 fused_ordering(674) 00:08:56.496 fused_ordering(675) 00:08:56.496 fused_ordering(676) 00:08:56.496 fused_ordering(677) 00:08:56.496 fused_ordering(678) 00:08:56.496 fused_ordering(679) 00:08:56.496 fused_ordering(680) 00:08:56.496 fused_ordering(681) 00:08:56.496 fused_ordering(682) 00:08:56.496 fused_ordering(683) 00:08:56.496 fused_ordering(684) 00:08:56.496 fused_ordering(685) 00:08:56.496 fused_ordering(686) 00:08:56.496 fused_ordering(687) 00:08:56.496 fused_ordering(688) 00:08:56.496 fused_ordering(689) 00:08:56.496 fused_ordering(690) 00:08:56.496 fused_ordering(691) 00:08:56.496 fused_ordering(692) 00:08:56.496 fused_ordering(693) 00:08:56.496 fused_ordering(694) 00:08:56.496 fused_ordering(695) 00:08:56.496 fused_ordering(696) 00:08:56.496 fused_ordering(697) 00:08:56.496 fused_ordering(698) 00:08:56.496 fused_ordering(699) 00:08:56.496 fused_ordering(700) 00:08:56.496 fused_ordering(701) 00:08:56.496 fused_ordering(702) 00:08:56.496 fused_ordering(703) 00:08:56.496 fused_ordering(704) 00:08:56.496 fused_ordering(705) 00:08:56.496 fused_ordering(706) 00:08:56.496 fused_ordering(707) 00:08:56.496 fused_ordering(708) 00:08:56.496 fused_ordering(709) 00:08:56.496 fused_ordering(710) 00:08:56.496 fused_ordering(711) 00:08:56.496 fused_ordering(712) 00:08:56.496 fused_ordering(713) 00:08:56.496 fused_ordering(714) 00:08:56.496 fused_ordering(715) 00:08:56.496 fused_ordering(716) 00:08:56.496 fused_ordering(717) 00:08:56.496 fused_ordering(718) 00:08:56.496 fused_ordering(719) 00:08:56.496 fused_ordering(720) 00:08:56.496 fused_ordering(721) 00:08:56.496 fused_ordering(722) 00:08:56.496 fused_ordering(723) 00:08:56.496 fused_ordering(724) 00:08:56.496 fused_ordering(725) 00:08:56.496 fused_ordering(726) 00:08:56.496 fused_ordering(727) 00:08:56.496 fused_ordering(728) 00:08:56.496 fused_ordering(729) 00:08:56.496 fused_ordering(730) 00:08:56.496 fused_ordering(731) 00:08:56.496 fused_ordering(732) 00:08:56.496 fused_ordering(733) 00:08:56.496 fused_ordering(734) 00:08:56.496 fused_ordering(735) 00:08:56.496 fused_ordering(736) 00:08:56.496 fused_ordering(737) 00:08:56.496 fused_ordering(738) 00:08:56.496 fused_ordering(739) 00:08:56.496 fused_ordering(740) 00:08:56.496 fused_ordering(741) 00:08:56.496 
fused_ordering(742) 00:08:56.496 fused_ordering(743) 00:08:56.496 fused_ordering(744) 00:08:56.496 fused_ordering(745) 00:08:56.496 fused_ordering(746) 00:08:56.496 fused_ordering(747) 00:08:56.496 fused_ordering(748) 00:08:56.496 fused_ordering(749) 00:08:56.496 fused_ordering(750) 00:08:56.496 fused_ordering(751) 00:08:56.496 fused_ordering(752) 00:08:56.496 fused_ordering(753) 00:08:56.496 fused_ordering(754) 00:08:56.496 fused_ordering(755) 00:08:56.496 fused_ordering(756) 00:08:56.496 fused_ordering(757) 00:08:56.496 fused_ordering(758) 00:08:56.496 fused_ordering(759) 00:08:56.496 fused_ordering(760) 00:08:56.496 fused_ordering(761) 00:08:56.496 fused_ordering(762) 00:08:56.496 fused_ordering(763) 00:08:56.496 fused_ordering(764) 00:08:56.496 fused_ordering(765) 00:08:56.496 fused_ordering(766) 00:08:56.496 fused_ordering(767) 00:08:56.496 fused_ordering(768) 00:08:56.496 fused_ordering(769) 00:08:56.496 fused_ordering(770) 00:08:56.496 fused_ordering(771) 00:08:56.496 fused_ordering(772) 00:08:56.496 fused_ordering(773) 00:08:56.496 fused_ordering(774) 00:08:56.496 fused_ordering(775) 00:08:56.496 fused_ordering(776) 00:08:56.496 fused_ordering(777) 00:08:56.496 fused_ordering(778) 00:08:56.496 fused_ordering(779) 00:08:56.496 fused_ordering(780) 00:08:56.496 fused_ordering(781) 00:08:56.496 fused_ordering(782) 00:08:56.496 fused_ordering(783) 00:08:56.496 fused_ordering(784) 00:08:56.496 fused_ordering(785) 00:08:56.496 fused_ordering(786) 00:08:56.496 fused_ordering(787) 00:08:56.496 fused_ordering(788) 00:08:56.496 fused_ordering(789) 00:08:56.496 fused_ordering(790) 00:08:56.496 fused_ordering(791) 00:08:56.496 fused_ordering(792) 00:08:56.496 fused_ordering(793) 00:08:56.496 fused_ordering(794) 00:08:56.496 fused_ordering(795) 00:08:56.496 fused_ordering(796) 00:08:56.496 fused_ordering(797) 00:08:56.496 fused_ordering(798) 00:08:56.496 fused_ordering(799) 00:08:56.496 fused_ordering(800) 00:08:56.496 fused_ordering(801) 00:08:56.496 fused_ordering(802) 00:08:56.496 fused_ordering(803) 00:08:56.496 fused_ordering(804) 00:08:56.496 fused_ordering(805) 00:08:56.496 fused_ordering(806) 00:08:56.496 fused_ordering(807) 00:08:56.496 fused_ordering(808) 00:08:56.496 fused_ordering(809) 00:08:56.496 fused_ordering(810) 00:08:56.496 fused_ordering(811) 00:08:56.496 fused_ordering(812) 00:08:56.496 fused_ordering(813) 00:08:56.496 fused_ordering(814) 00:08:56.496 fused_ordering(815) 00:08:56.496 fused_ordering(816) 00:08:56.496 fused_ordering(817) 00:08:56.496 fused_ordering(818) 00:08:56.496 fused_ordering(819) 00:08:56.496 fused_ordering(820) 00:08:57.061 fused_ordering(821) 00:08:57.061 fused_ordering(822) 00:08:57.061 fused_ordering(823) 00:08:57.061 fused_ordering(824) 00:08:57.061 fused_ordering(825) 00:08:57.061 fused_ordering(826) 00:08:57.061 fused_ordering(827) 00:08:57.061 fused_ordering(828) 00:08:57.061 fused_ordering(829) 00:08:57.061 fused_ordering(830) 00:08:57.061 fused_ordering(831) 00:08:57.061 fused_ordering(832) 00:08:57.061 fused_ordering(833) 00:08:57.061 fused_ordering(834) 00:08:57.061 fused_ordering(835) 00:08:57.061 fused_ordering(836) 00:08:57.061 fused_ordering(837) 00:08:57.061 fused_ordering(838) 00:08:57.061 fused_ordering(839) 00:08:57.061 fused_ordering(840) 00:08:57.061 fused_ordering(841) 00:08:57.061 fused_ordering(842) 00:08:57.061 fused_ordering(843) 00:08:57.061 fused_ordering(844) 00:08:57.061 fused_ordering(845) 00:08:57.061 fused_ordering(846) 00:08:57.061 fused_ordering(847) 00:08:57.061 fused_ordering(848) 00:08:57.061 fused_ordering(849) 
00:08:57.061 fused_ordering(850) 00:08:57.061 fused_ordering(851) 00:08:57.061 fused_ordering(852) 00:08:57.061 fused_ordering(853) 00:08:57.061 fused_ordering(854) 00:08:57.061 fused_ordering(855) 00:08:57.061 fused_ordering(856) 00:08:57.061 fused_ordering(857) 00:08:57.061 fused_ordering(858) 00:08:57.061 fused_ordering(859) 00:08:57.061 fused_ordering(860) 00:08:57.061 fused_ordering(861) 00:08:57.061 fused_ordering(862) 00:08:57.061 fused_ordering(863) 00:08:57.061 fused_ordering(864) 00:08:57.061 fused_ordering(865) 00:08:57.061 fused_ordering(866) 00:08:57.061 fused_ordering(867) 00:08:57.061 fused_ordering(868) 00:08:57.061 fused_ordering(869) 00:08:57.061 fused_ordering(870) 00:08:57.061 fused_ordering(871) 00:08:57.061 fused_ordering(872) 00:08:57.061 fused_ordering(873) 00:08:57.061 fused_ordering(874) 00:08:57.061 fused_ordering(875) 00:08:57.061 fused_ordering(876) 00:08:57.061 fused_ordering(877) 00:08:57.061 fused_ordering(878) 00:08:57.061 fused_ordering(879) 00:08:57.061 fused_ordering(880) 00:08:57.061 fused_ordering(881) 00:08:57.061 fused_ordering(882) 00:08:57.061 fused_ordering(883) 00:08:57.061 fused_ordering(884) 00:08:57.061 fused_ordering(885) 00:08:57.061 fused_ordering(886) 00:08:57.061 fused_ordering(887) 00:08:57.061 fused_ordering(888) 00:08:57.061 fused_ordering(889) 00:08:57.061 fused_ordering(890) 00:08:57.061 fused_ordering(891) 00:08:57.061 fused_ordering(892) 00:08:57.061 fused_ordering(893) 00:08:57.061 fused_ordering(894) 00:08:57.061 fused_ordering(895) 00:08:57.061 fused_ordering(896) 00:08:57.061 fused_ordering(897) 00:08:57.061 fused_ordering(898) 00:08:57.061 fused_ordering(899) 00:08:57.061 fused_ordering(900) 00:08:57.061 fused_ordering(901) 00:08:57.061 fused_ordering(902) 00:08:57.061 fused_ordering(903) 00:08:57.061 fused_ordering(904) 00:08:57.061 fused_ordering(905) 00:08:57.061 fused_ordering(906) 00:08:57.061 fused_ordering(907) 00:08:57.061 fused_ordering(908) 00:08:57.061 fused_ordering(909) 00:08:57.061 fused_ordering(910) 00:08:57.061 fused_ordering(911) 00:08:57.061 fused_ordering(912) 00:08:57.061 fused_ordering(913) 00:08:57.061 fused_ordering(914) 00:08:57.061 fused_ordering(915) 00:08:57.061 fused_ordering(916) 00:08:57.061 fused_ordering(917) 00:08:57.061 fused_ordering(918) 00:08:57.061 fused_ordering(919) 00:08:57.061 fused_ordering(920) 00:08:57.061 fused_ordering(921) 00:08:57.061 fused_ordering(922) 00:08:57.061 fused_ordering(923) 00:08:57.061 fused_ordering(924) 00:08:57.061 fused_ordering(925) 00:08:57.061 fused_ordering(926) 00:08:57.061 fused_ordering(927) 00:08:57.061 fused_ordering(928) 00:08:57.061 fused_ordering(929) 00:08:57.061 fused_ordering(930) 00:08:57.061 fused_ordering(931) 00:08:57.061 fused_ordering(932) 00:08:57.061 fused_ordering(933) 00:08:57.061 fused_ordering(934) 00:08:57.061 fused_ordering(935) 00:08:57.061 fused_ordering(936) 00:08:57.061 fused_ordering(937) 00:08:57.061 fused_ordering(938) 00:08:57.061 fused_ordering(939) 00:08:57.061 fused_ordering(940) 00:08:57.061 fused_ordering(941) 00:08:57.061 fused_ordering(942) 00:08:57.061 fused_ordering(943) 00:08:57.061 fused_ordering(944) 00:08:57.061 fused_ordering(945) 00:08:57.061 fused_ordering(946) 00:08:57.061 fused_ordering(947) 00:08:57.061 fused_ordering(948) 00:08:57.061 fused_ordering(949) 00:08:57.061 fused_ordering(950) 00:08:57.061 fused_ordering(951) 00:08:57.061 fused_ordering(952) 00:08:57.061 fused_ordering(953) 00:08:57.061 fused_ordering(954) 00:08:57.061 fused_ordering(955) 00:08:57.061 fused_ordering(956) 00:08:57.061 
fused_ordering(957) 00:08:57.061 fused_ordering(958) 00:08:57.061 fused_ordering(959) 00:08:57.061 fused_ordering(960) 00:08:57.061 fused_ordering(961) 00:08:57.061 fused_ordering(962) 00:08:57.061 fused_ordering(963) 00:08:57.061 fused_ordering(964) 00:08:57.061 fused_ordering(965) 00:08:57.061 fused_ordering(966) 00:08:57.061 fused_ordering(967) 00:08:57.061 fused_ordering(968) 00:08:57.061 fused_ordering(969) 00:08:57.061 fused_ordering(970) 00:08:57.061 fused_ordering(971) 00:08:57.061 fused_ordering(972) 00:08:57.061 fused_ordering(973) 00:08:57.061 fused_ordering(974) 00:08:57.061 fused_ordering(975) 00:08:57.061 fused_ordering(976) 00:08:57.061 fused_ordering(977) 00:08:57.061 fused_ordering(978) 00:08:57.061 fused_ordering(979) 00:08:57.061 fused_ordering(980) 00:08:57.061 fused_ordering(981) 00:08:57.061 fused_ordering(982) 00:08:57.061 fused_ordering(983) 00:08:57.061 fused_ordering(984) 00:08:57.061 fused_ordering(985) 00:08:57.061 fused_ordering(986) 00:08:57.061 fused_ordering(987) 00:08:57.061 fused_ordering(988) 00:08:57.061 fused_ordering(989) 00:08:57.061 fused_ordering(990) 00:08:57.061 fused_ordering(991) 00:08:57.061 fused_ordering(992) 00:08:57.061 fused_ordering(993) 00:08:57.061 fused_ordering(994) 00:08:57.061 fused_ordering(995) 00:08:57.061 fused_ordering(996) 00:08:57.061 fused_ordering(997) 00:08:57.061 fused_ordering(998) 00:08:57.061 fused_ordering(999) 00:08:57.061 fused_ordering(1000) 00:08:57.061 fused_ordering(1001) 00:08:57.061 fused_ordering(1002) 00:08:57.061 fused_ordering(1003) 00:08:57.061 fused_ordering(1004) 00:08:57.061 fused_ordering(1005) 00:08:57.061 fused_ordering(1006) 00:08:57.061 fused_ordering(1007) 00:08:57.061 fused_ordering(1008) 00:08:57.061 fused_ordering(1009) 00:08:57.061 fused_ordering(1010) 00:08:57.061 fused_ordering(1011) 00:08:57.061 fused_ordering(1012) 00:08:57.061 fused_ordering(1013) 00:08:57.061 fused_ordering(1014) 00:08:57.061 fused_ordering(1015) 00:08:57.061 fused_ordering(1016) 00:08:57.061 fused_ordering(1017) 00:08:57.061 fused_ordering(1018) 00:08:57.061 fused_ordering(1019) 00:08:57.061 fused_ordering(1020) 00:08:57.061 fused_ordering(1021) 00:08:57.061 fused_ordering(1022) 00:08:57.061 fused_ordering(1023) 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:57.061 rmmod nvme_tcp 00:08:57.061 rmmod nvme_fabrics 00:08:57.061 rmmod nvme_keyring 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 148148 ']' 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 148148 
00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 148148 ']' 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 148148 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 148148 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 148148' 00:08:57.061 killing process with pid 148148 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 148148 00:08:57.061 [2024-05-16 20:07:44.144440] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:57.061 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 148148 00:08:57.320 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:57.320 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:57.320 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:57.320 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:57.320 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:57.320 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.320 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.320 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.855 20:07:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:59.855 00:08:59.855 real 0m7.126s 00:08:59.855 user 0m4.982s 00:08:59.855 sys 0m2.688s 00:08:59.855 20:07:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:59.855 20:07:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:59.855 ************************************ 00:08:59.855 END TEST nvmf_fused_ordering 00:08:59.855 ************************************ 00:08:59.855 20:07:46 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:59.855 20:07:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:59.855 20:07:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:59.855 20:07:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:59.855 ************************************ 00:08:59.855 START TEST nvmf_delete_subsystem 00:08:59.855 ************************************ 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:59.855 * 
Looking for test storage... 00:08:59.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.855 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:59.856 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:01.764 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:01.764 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.764 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:01.765 Found net devices under 0000:09:00.0: cvl_0_0 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:01.765 Found net devices under 0000:09:00.1: cvl_0_1 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:01.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:09:01.765 00:09:01.765 --- 10.0.0.2 ping statistics --- 00:09:01.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.765 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:09:01.765 00:09:01.765 --- 10.0.0.1 ping statistics --- 00:09:01.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.765 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=150363 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 150363 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 150363 ']' 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
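The two ping checks above close out nvmf_tcp_init: the second e810 port has been moved into its own network namespace so initiator and target traffic has to cross the physical link. Condensed from the trace, the topology setup amounts to the sketch below (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are whatever this rig's common.sh detected and chose; the comments are a reading aid, not authoritative documentation).

# Dual-port NVMe/TCP test topology, as built by nvmf_tcp_init in the trace above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address (host namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address (inside the namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP (port 4420) back in
ping -c 1 10.0.0.2                                                 # host -> namespaced target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> host port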
00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:01.765 20:07:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:01.765 [2024-05-16 20:07:48.710517] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:09:01.765 [2024-05-16 20:07:48.710596] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.765 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.765 [2024-05-16 20:07:48.781574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:01.765 [2024-05-16 20:07:48.903694] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.765 [2024-05-16 20:07:48.903765] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.765 [2024-05-16 20:07:48.903783] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.765 [2024-05-16 20:07:48.903797] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.765 [2024-05-16 20:07:48.903808] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.765 [2024-05-16 20:07:48.903917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.765 [2024-05-16 20:07:48.903923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.702 [2024-05-16 20:07:49.716453] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.702 20:07:49 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.702 [2024-05-16 20:07:49.732518] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:02.702 [2024-05-16 20:07:49.732789] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.702 NULL1 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.702 Delay0 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=150518 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:02.702 20:07:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:02.702 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.702 [2024-05-16 20:07:49.807463] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
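At this point the target side is fully provisioned and the first spdk_nvme_perf run has been launched against a deliberately slow namespace. The rpc_cmd sequence traced above reduces to the following sketch (the path to scripts/rpc.py is an assumption, since rpc_cmd resolves the client internally; the arguments are copied verbatim from the trace, and the delay values are microseconds, so roughly one second of added latency per I/O).

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed location of the RPC client behind rpc_cmd
$RPC nvmf_create_transport -t tcp -o -u 8192                           # TCP transport, options as traced
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                                   # 1000 MB null bdev, 512-byte blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # wrap it in ~1 s of delay
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0           # expose the slow bdev as namespace 1

The delay bdev keeps I/O in flight long enough for the nvmf_delete_subsystem call at @32 to race against it, which is what produces the wall of "completed with error (sct=0, sc=8)" completions that follows.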
00:09:05.232 20:07:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.232 20:07:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.232 20:07:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 starting I/O failed: -6 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 starting I/O failed: -6 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Write completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 starting I/O failed: -6 00:09:05.232 Write completed with error (sct=0, sc=8) 00:09:05.232 Write completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 starting I/O failed: -6 00:09:05.232 Write completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Write completed with error (sct=0, sc=8) 00:09:05.232 Write completed with error (sct=0, sc=8) 00:09:05.232 starting I/O failed: -6 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Write completed with error (sct=0, sc=8) 00:09:05.232 starting I/O failed: -6 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 starting I/O failed: -6 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Write completed with error (sct=0, sc=8) 00:09:05.232 Write completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 starting I/O failed: -6 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Write completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 starting I/O failed: -6 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Write completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 starting I/O failed: -6 00:09:05.232 Write completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Write completed with error (sct=0, sc=8) 00:09:05.232 starting I/O failed: -6 00:09:05.232 Write completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Write completed with error (sct=0, sc=8) 00:09:05.232 starting I/O failed: -6 00:09:05.232 Write completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Write completed with error (sct=0, sc=8) 00:09:05.232 starting I/O failed: -6 00:09:05.232 Read completed with error (sct=0, sc=8) 
00:09:05.232 Write completed with error (sct=0, sc=8) 00:09:05.232 [2024-05-16 20:07:52.018863] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24107a0 is same with the state(5) to be set 00:09:05.232 Read completed with error (sct=0, sc=8) 00:09:05.232 Write completed with error (sct=0, sc=8) 00:09:05.232 starting I/O failed: -6 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 starting I/O failed: -6 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 starting I/O failed: -6 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 starting I/O failed: -6 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 starting I/O failed: -6 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 starting I/O failed: -6 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 starting I/O failed: -6 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 starting I/O failed: -6 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 starting I/O failed: -6 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 [2024-05-16 20:07:52.019632] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f550c00c600 is same with the state(5) to be set 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 
00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error 
(sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Write completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:05.233 Read completed with error (sct=0, sc=8) 00:09:06.168 [2024-05-16 20:07:52.984417] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411ab0 is same with the state(5) to be set 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Write completed with error (sct=0, sc=8) 00:09:06.168 Write completed with error (sct=0, sc=8) 00:09:06.168 Write completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Write completed with error (sct=0, sc=8) 00:09:06.168 Write completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Write completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Write completed with error (sct=0, sc=8) 00:09:06.168 [2024-05-16 20:07:53.018253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f550c00c2f0 is same with the state(5) to be set 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Write completed with error (sct=0, sc=8) 00:09:06.168 Write completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Write completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Write completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Write completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 [2024-05-16 20:07:53.022573] 
nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410980 is same with the state(5) to be set 00:09:06.168 Write completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Write completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Write completed with error (sct=0, sc=8) 00:09:06.168 Write completed with error (sct=0, sc=8) 00:09:06.168 Write completed with error (sct=0, sc=8) 00:09:06.168 Write completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.168 Read completed with error (sct=0, sc=8) 00:09:06.169 Write completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Write completed with error (sct=0, sc=8) 00:09:06.169 Write completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Write completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Write completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Write completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 [2024-05-16 20:07:53.022801] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24103e0 is same with the state(5) to be set 00:09:06.169 Write completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Write completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Write completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Write completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Write completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Write completed with error (sct=0, sc=8) 00:09:06.169 Write completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 
00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Write completed with error (sct=0, sc=8) 00:09:06.169 Write completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 Write completed with error (sct=0, sc=8) 00:09:06.169 Read completed with error (sct=0, sc=8) 00:09:06.169 [2024-05-16 20:07:53.023057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24105c0 is same with the state(5) to be set 00:09:06.169 Initializing NVMe Controllers 00:09:06.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:06.169 Controller IO queue size 128, less than required. 00:09:06.169 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:06.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:06.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:06.169 Initialization complete. Launching workers. 00:09:06.169 ======================================================== 00:09:06.169 Latency(us) 00:09:06.169 Device Information : IOPS MiB/s Average min max 00:09:06.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 186.56 0.09 952413.27 842.30 1013463.19 00:09:06.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 151.83 0.07 892290.91 403.64 1013652.43 00:09:06.169 ======================================================== 00:09:06.169 Total : 338.39 0.17 925437.55 403.64 1013652.43 00:09:06.169 00:09:06.169 [2024-05-16 20:07:53.024008] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2411ab0 (9): Bad file descriptor 00:09:06.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:06.169 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.169 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:06.169 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 150518 00:09:06.169 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 150518 00:09:06.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (150518) - No such process 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 150518 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 150518 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 150518 00:09:06.427 
20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.427 [2024-05-16 20:07:53.547258] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.427 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=151041 00:09:06.428 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:06.428 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:06.428 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 151041 00:09:06.428 20:07:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:06.684 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.684 [2024-05-16 20:07:53.610703] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
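The trace above corresponds roughly to this stretch of test/nvmf/target/delete_subsystem.sh: the subsystem is re-created with a namespace cap, a TCP listener on 10.0.0.2:4420 and the Delay0 namespace are attached, spdk_nvme_perf is launched in the background (perf_pid=151041), and the subsystem is then torn down while that workload is still running, which is why perf later reports I/O errors. A minimal stand-alone sketch of the same flow, assuming a running nvmf_tgt, the stock scripts/rpc.py, and a reachable 10.0.0.2:4420 listener (paths, error handling and the exact delete timing are simplified relative to the real script):

  #!/usr/bin/env bash
  # Sketch only: re-create the subsystem, attach a namespace, run I/O, delete it mid-run.
  set -e
  rpc=./scripts/rpc.py                 # assumed location of SPDK's rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns $nqn Delay0

  # 3-second random read/write workload against the target (same flags as in the log).
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  # Delete the subsystem while perf is still issuing I/O (the condition this test exercises;
  # the real script sequences this slightly differently).
  $rpc nvmf_delete_subsystem $nqn

  # Poll until perf exits, mirroring the kill -0 / sleep 0.5 loop in the trace.
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && break
      sleep 0.5
  done
  wait "$perf_pid" || true             # perf is expected to exit with I/O errors here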
00:09:06.942 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:06.942 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 151041 00:09:06.942 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:07.508 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:07.508 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 151041 00:09:07.508 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:08.072 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:08.072 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 151041 00:09:08.072 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:08.636 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:08.636 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 151041 00:09:08.637 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:09.202 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:09.202 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 151041 00:09:09.202 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:09.459 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:09.459 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 151041 00:09:09.459 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:09.717 Initializing NVMe Controllers 00:09:09.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:09.717 Controller IO queue size 128, less than required. 00:09:09.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:09.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:09.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:09.718 Initialization complete. Launching workers. 
00:09:09.718 ======================================================== 00:09:09.718 Latency(us) 00:09:09.718 Device Information : IOPS MiB/s Average min max 00:09:09.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003673.01 1000190.37 1011805.15 00:09:09.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005149.00 1000272.94 1011456.53 00:09:09.718 ======================================================== 00:09:09.718 Total : 256.00 0.12 1004411.01 1000190.37 1011805.15 00:09:09.718 00:09:09.976 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:09.976 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 151041 00:09:09.976 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (151041) - No such process 00:09:09.976 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 151041 00:09:09.976 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:09.976 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:09.976 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:09.976 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:09.976 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:09.976 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:09.976 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:09.976 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:09.976 rmmod nvme_tcp 00:09:09.976 rmmod nvme_fabrics 00:09:09.976 rmmod nvme_keyring 00:09:10.234 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:10.234 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:10.234 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:10.234 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 150363 ']' 00:09:10.234 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 150363 00:09:10.234 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 150363 ']' 00:09:10.234 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 150363 00:09:10.234 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:09:10.234 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:10.234 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 150363 00:09:10.234 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:10.234 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:10.234 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 150363' 00:09:10.234 killing process with pid 150363 00:09:10.234 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 150363 00:09:10.234 [2024-05-16 20:07:57.167402] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:10.234 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 150363 00:09:10.494 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:10.494 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:10.494 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:10.494 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:10.494 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:10.494 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.494 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:10.494 20:07:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.399 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:12.399 00:09:12.399 real 0m12.977s 00:09:12.399 user 0m29.370s 00:09:12.399 sys 0m2.996s 00:09:12.399 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:12.399 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.399 ************************************ 00:09:12.399 END TEST nvmf_delete_subsystem 00:09:12.399 ************************************ 00:09:12.399 20:07:59 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:12.399 20:07:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:12.399 20:07:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:12.399 20:07:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:12.399 ************************************ 00:09:12.399 START TEST nvmf_ns_masking 00:09:12.399 ************************************ 00:09:12.399 20:07:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:12.657 * Looking for test storage... 
00:09:12.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=9e2f163b-0274-4bae-b218-c844a7e920c0 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:12.657 20:07:59 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:12.657 20:07:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:14.558 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.558 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:14.558 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:14.558 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:14.558 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:14.558 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:14.558 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:14.559 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:14.559 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:14.559 Found net devices under 0000:09:00.0: cvl_0_0 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:14.559 Found net devices under 0000:09:00.1: cvl_0_1 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:14.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:09:14.559 00:09:14.559 --- 10.0.0.2 ping statistics --- 00:09:14.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.559 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:09:14.559 00:09:14.559 --- 10.0.0.1 ping statistics --- 00:09:14.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.559 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=153382 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 153382 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 153382 ']' 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:14.559 20:08:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:14.559 [2024-05-16 20:08:01.699993] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
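The nvmf_tcp_init trace above sets up the test network before nvmf_tgt starts: the two ice/E810 ports are exposed as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target side with 10.0.0.2/24, cvl_0_1 stays in the default namespace as the initiator with 10.0.0.1/24, port 4420 is allowed through iptables, connectivity is verified with ping in both directions, and nvme-tcp is loaded before the target app is launched inside the namespace. A condensed sketch of those steps, using the interface names and addresses of this particular run:

  #!/usr/bin/env bash
  # Condensed version of the nvmf_tcp_init steps traced above.
  set -e
  target_if=cvl_0_0            # NIC handed to the SPDK target
  initiator_if=cvl_0_1         # NIC left in the default namespace for the initiator
  ns=cvl_0_0_ns_spdk

  ip -4 addr flush "$target_if"
  ip -4 addr flush "$initiator_if"
  ip netns add "$ns"
  ip link set "$target_if" netns "$ns"

  ip addr add 10.0.0.1/24 dev "$initiator_if"
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  ip link set "$initiator_if" up
  ip netns exec "$ns" ip link set "$target_if" up
  ip netns exec "$ns" ip link set lo up

  iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                       # initiator -> target
  ip netns exec "$ns" ping -c 1 10.0.0.1   # target -> initiator

  modprobe nvme-tcp
  # The target itself then runs inside the namespace, as in the trace:
  # ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF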
00:09:14.559 [2024-05-16 20:08:01.700066] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.817 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.817 [2024-05-16 20:08:01.767783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.817 [2024-05-16 20:08:01.885651] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.817 [2024-05-16 20:08:01.885715] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.817 [2024-05-16 20:08:01.885731] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.817 [2024-05-16 20:08:01.885745] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.817 [2024-05-16 20:08:01.885757] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.817 [2024-05-16 20:08:01.885842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.817 [2024-05-16 20:08:01.885922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.817 [2024-05-16 20:08:01.885952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.817 [2024-05-16 20:08:01.885955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.750 20:08:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:15.750 20:08:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:09:15.750 20:08:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:15.750 20:08:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:15.750 20:08:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:15.750 20:08:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.750 20:08:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:16.007 [2024-05-16 20:08:02.963492] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.007 20:08:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:09:16.007 20:08:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:09:16.007 20:08:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:16.265 Malloc1 00:09:16.265 20:08:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:16.526 Malloc2 00:09:16.526 20:08:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:16.783 20:08:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:17.040 20:08:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.298 [2024-05-16 20:08:04.235118] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:17.298 [2024-05-16 20:08:04.235460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.298 20:08:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:09:17.298 20:08:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9e2f163b-0274-4bae-b218-c844a7e920c0 -a 10.0.0.2 -s 4420 -i 4 00:09:17.298 20:08:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:09:17.298 20:08:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:09:17.298 20:08:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:17.298 20:08:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:17.298 20:08:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:09:19.823 20:08:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:19.823 20:08:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:19.823 20:08:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:19.823 20:08:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:19.823 20:08:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:19.823 20:08:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:09:19.823 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:19.824 [ 0]:0x1 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=557c24061ee148a2b87f26033c7b7d0a 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 557c24061ee148a2b87f26033c7b7d0a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:19.824 [ 0]:0x1 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=557c24061ee148a2b87f26033c7b7d0a 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 557c24061ee148a2b87f26033c7b7d0a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:19.824 [ 1]:0x2 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c90254d1f4f5459e90ddaeb90a50b9e7 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c90254d1f4f5459e90ddaeb90a50b9e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:19.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.824 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.081 20:08:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:20.339 20:08:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:09:20.339 20:08:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9e2f163b-0274-4bae-b218-c844a7e920c0 -a 10.0.0.2 -s 4420 -i 4 00:09:20.597 20:08:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:20.597 20:08:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:09:20.597 20:08:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:20.597 20:08:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:09:20.597 20:08:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:09:20.597 20:08:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:09:22.495 20:08:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:22.495 20:08:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:22.495 20:08:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # 
grep -c SPDKISFASTANDAWESOME 00:09:22.495 20:08:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:22.495 20:08:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:22.495 20:08:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:09:22.495 20:08:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:22.495 20:08:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:22.753 [ 0]:0x2 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c90254d1f4f5459e90ddaeb90a50b9e7 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c90254d1f4f5459e90ddaeb90a50b9e7 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:22.753 20:08:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:23.018 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:09:23.018 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:23.018 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:23.018 [ 0]:0x1 00:09:23.018 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:23.018 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:23.282 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=557c24061ee148a2b87f26033c7b7d0a 00:09:23.282 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 557c24061ee148a2b87f26033c7b7d0a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:23.282 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:09:23.282 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:23.282 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:23.282 [ 1]:0x2 00:09:23.282 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:23.282 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:23.282 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c90254d1f4f5459e90ddaeb90a50b9e7 00:09:23.282 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c90254d1f4f5459e90ddaeb90a50b9e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:23.282 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:23.540 20:08:10 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:23.540 [ 0]:0x2 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c90254d1f4f5459e90ddaeb90a50b9e7 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c90254d1f4f5459e90ddaeb90a50b9e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.540 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:23.798 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:09:23.798 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9e2f163b-0274-4bae-b218-c844a7e920c0 -a 10.0.0.2 -s 4420 -i 4 00:09:24.056 20:08:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:24.056 20:08:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:09:24.056 20:08:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:24.056 20:08:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:09:24.056 20:08:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:09:24.056 20:08:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:09:25.955 20:08:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:25.956 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:25.956 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:25.956 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:09:25.956 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:25.956 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:09:25.956 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:09:25.956 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:26.214 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:26.214 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:26.214 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:09:26.214 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:26.214 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:26.214 [ 0]:0x1 00:09:26.214 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:26.214 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:26.214 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=557c24061ee148a2b87f26033c7b7d0a 00:09:26.214 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 557c24061ee148a2b87f26033c7b7d0a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:26.214 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:09:26.214 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:26.214 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:26.214 [ 1]:0x2 00:09:26.214 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:26.214 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:26.214 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c90254d1f4f5459e90ddaeb90a50b9e7 00:09:26.214 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c90254d1f4f5459e90ddaeb90a50b9e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:26.214 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:26.472 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:09:26.473 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:26.473 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:26.473 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:26.473 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.473 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:26.473 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.473 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:26.473 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:26.473 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:26.473 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:26.473 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:26.735 [ 0]:0x2 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c90254d1f4f5459e90ddaeb90a50b9e7 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c90254d1f4f5459e90ddaeb90a50b9e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:26.735 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:26.995 [2024-05-16 20:08:13.914767] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:26.995 
request: 00:09:26.995 { 00:09:26.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:26.995 "nsid": 2, 00:09:26.995 "host": "nqn.2016-06.io.spdk:host1", 00:09:26.995 "method": "nvmf_ns_remove_host", 00:09:26.995 "req_id": 1 00:09:26.995 } 00:09:26.995 Got JSON-RPC error response 00:09:26.995 response: 00:09:26.995 { 00:09:26.995 "code": -32602, 00:09:26.995 "message": "Invalid parameters" 00:09:26.995 } 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:26.995 20:08:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:26.995 [ 0]:0x2 00:09:26.995 20:08:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:26.995 20:08:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:26.995 20:08:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c90254d1f4f5459e90ddaeb90a50b9e7 00:09:26.995 20:08:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c90254d1f4f5459e90ddaeb90a50b9e7 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:26.995 20:08:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:09:26.995 20:08:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:27.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.254 20:08:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:27.512 rmmod nvme_tcp 00:09:27.512 rmmod nvme_fabrics 00:09:27.512 rmmod nvme_keyring 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 153382 ']' 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 153382 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 153382 ']' 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 153382 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 153382 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 153382' 00:09:27.512 killing process with pid 153382 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 153382 00:09:27.512 [2024-05-16 20:08:14.534964] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:27.512 20:08:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 153382 00:09:27.771 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:27.771 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:27.771 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:27.771 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
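For reference, the namespace-visibility checks traced above all reduce to one small pattern: list the namespaces on the connected controller, then read the NGUID and treat an all-zero value as "masked". The following is a condensed, illustrative sketch of that pattern under the same device path, NSIDs and NQNs seen in the log; the helper name ns_visible and its argument handling are assumptions for illustration, not the test script itself.

  # Sketch of the visibility check used by the masking test above (assumes /dev/nvme0, as in the log).
  ns_visible() {                      # ns_visible <nsid> -> succeeds if the NSID is exported to this host
      local nsid=$1 nguid
      nvme list-ns /dev/nvme0 | grep -q "${nsid}" || return 1
      nguid=$(nvme id-ns /dev/nvme0 -n "${nsid}" -o json | jq -r .nguid)
      # In the trace, a masked namespace reports an all-zero NGUID.
      [[ ${nguid} != "00000000000000000000000000000000" ]]
  }

  # Masking NSID 1 from host1 is done over JSON-RPC, exactly as in the log:
  #   scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  ns_visible 0x1 || echo "NSID 1 is now hidden from this host"
  ns_visible 0x2 && echo "NSID 2 is still visible"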
00:09:27.771 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:27.771 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.771 20:08:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.771 20:08:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.305 20:08:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:30.305 00:09:30.305 real 0m17.375s 00:09:30.305 user 0m55.169s 00:09:30.305 sys 0m3.714s 00:09:30.305 20:08:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:30.305 20:08:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:30.305 ************************************ 00:09:30.305 END TEST nvmf_ns_masking 00:09:30.305 ************************************ 00:09:30.305 20:08:16 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:30.305 20:08:16 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:30.305 20:08:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:30.305 20:08:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:30.305 20:08:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:30.305 ************************************ 00:09:30.305 START TEST nvmf_nvme_cli 00:09:30.305 ************************************ 00:09:30.305 20:08:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:30.305 * Looking for test storage... 
00:09:30.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:30.305 20:08:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:32.235 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:32.235 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:32.235 Found net devices under 0000:09:00.0: cvl_0_0 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:32.235 Found net devices under 0000:09:00.1: cvl_0_1 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:32.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:32.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:09:32.235 00:09:32.235 --- 10.0.0.2 ping statistics --- 00:09:32.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.235 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:32.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:32.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:09:32.235 00:09:32.235 --- 10.0.0.1 ping statistics --- 00:09:32.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.235 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:32.235 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:32.236 20:08:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:32.236 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:32.236 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:32.236 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:32.236 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=156949 00:09:32.236 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:32.236 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 156949 00:09:32.236 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 156949 ']' 00:09:32.236 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.236 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:32.236 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.236 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:32.236 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:32.236 [2024-05-16 20:08:19.227303] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:09:32.236 [2024-05-16 20:08:19.227392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.236 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.236 [2024-05-16 20:08:19.292905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.495 [2024-05-16 20:08:19.406024] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.495 [2024-05-16 20:08:19.406080] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:32.495 [2024-05-16 20:08:19.406110] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.495 [2024-05-16 20:08:19.406121] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.495 [2024-05-16 20:08:19.406131] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.495 [2024-05-16 20:08:19.406260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.495 [2024-05-16 20:08:19.406329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.495 [2024-05-16 20:08:19.406360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.495 [2024-05-16 20:08:19.406362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:32.495 [2024-05-16 20:08:19.560444] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:32.495 Malloc0 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:32.495 Malloc1 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.495 20:08:19 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.495 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:32.753 [2024-05-16 20:08:19.644421] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:32.753 [2024-05-16 20:08:19.644702] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:09:32.753 00:09:32.753 Discovery Log Number of Records 2, Generation counter 2 00:09:32.753 =====Discovery Log Entry 0====== 00:09:32.753 trtype: tcp 00:09:32.753 adrfam: ipv4 00:09:32.753 subtype: current discovery subsystem 00:09:32.753 treq: not required 00:09:32.753 portid: 0 00:09:32.753 trsvcid: 4420 00:09:32.753 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:32.753 traddr: 10.0.0.2 00:09:32.753 eflags: explicit discovery connections, duplicate discovery information 00:09:32.753 sectype: none 00:09:32.753 =====Discovery Log Entry 1====== 00:09:32.753 trtype: tcp 00:09:32.753 adrfam: ipv4 00:09:32.753 subtype: nvme subsystem 00:09:32.753 treq: not required 00:09:32.753 portid: 0 00:09:32.753 trsvcid: 4420 00:09:32.753 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:32.753 traddr: 10.0.0.2 00:09:32.753 eflags: none 00:09:32.753 sectype: none 00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
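The nvme_cli run above follows a straightforward bring-up-and-connect flow. Below is a condensed sketch of that flow with the rpc.py path, transport options, NQN, serial, listener address and lsblk check taken from the trace; the HOSTNQN/HOSTID variables are placeholders for the generated values in the log, and the network-namespace plumbing (ip netns exec cvl_0_0_ns_spdk ...) used by the real test is elided here.

  # Target side: transport, two malloc bdevs, one subsystem with two namespaces, TCP listener.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as in the log
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Host side: discover, connect, then count block devices by serial (expects 2, one per namespace).
  nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" --hostid="$HOSTID"
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN" --hostid="$HOSTID"
  sleep 2
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME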
00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:32.753 20:08:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:33.319 20:08:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:33.319 20:08:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:09:33.319 20:08:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:33.319 20:08:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:09:33.319 20:08:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:09:33.319 20:08:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:35.849 /dev/nvme0n1 ]] 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:35.849 20:08:22 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:35.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:35.849 rmmod nvme_tcp 00:09:35.849 rmmod nvme_fabrics 00:09:35.849 rmmod nvme_keyring 00:09:35.849 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:09:36.107 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:09:36.107 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:09:36.107 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 156949 ']' 00:09:36.107 20:08:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 156949 00:09:36.108 20:08:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 156949 ']' 00:09:36.108 20:08:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 156949 00:09:36.108 20:08:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:09:36.108 20:08:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:36.108 20:08:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 156949 00:09:36.108 20:08:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:36.108 20:08:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:36.108 20:08:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 156949' 00:09:36.108 killing process with pid 156949 00:09:36.108 20:08:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 156949 00:09:36.108 [2024-05-16 20:08:23.023970] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:36.108 20:08:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 156949 00:09:36.367 20:08:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:36.367 20:08:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:36.367 20:08:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:36.367 20:08:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:36.367 20:08:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:36.367 20:08:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.367 20:08:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.367 20:08:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.272 20:08:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:38.272 00:09:38.272 real 0m8.417s 00:09:38.272 user 0m15.913s 00:09:38.272 sys 0m2.161s 00:09:38.272 20:08:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:38.272 20:08:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:38.272 ************************************ 00:09:38.272 END TEST nvmf_nvme_cli 00:09:38.272 ************************************ 00:09:38.272 20:08:25 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:38.272 20:08:25 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:38.272 20:08:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:38.272 20:08:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:38.272 20:08:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:38.531 ************************************ 00:09:38.531 START TEST 
nvmf_vfio_user 00:09:38.531 ************************************ 00:09:38.531 20:08:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:38.531 * Looking for test storage... 00:09:38.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.531 20:08:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.531 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:09:38.531 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.531 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.531 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.531 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.531 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.531 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.531 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.531 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.531 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.531 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.531 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:38.531 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:38.531 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.531 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.531 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.531 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=157873 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 157873' 00:09:38.532 Process pid: 157873 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 157873 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 157873 ']' 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:38.532 20:08:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:09:38.532 [2024-05-16 20:08:25.560257] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:09:38.532 [2024-05-16 20:08:25.560347] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.532 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.532 [2024-05-16 20:08:25.627122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.790 [2024-05-16 20:08:25.735715] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.790 [2024-05-16 20:08:25.735768] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.790 [2024-05-16 20:08:25.735796] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.790 [2024-05-16 20:08:25.735808] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.790 [2024-05-16 20:08:25.735818] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
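At this point the target is up: nvmf_tgt was launched with -i 0 -e 0xFFFF on cores 0-3, its pid recorded, a cleanup trap installed, and waitforlisten blocks until /var/tmp/spdk.sock answers. A hedged sketch of that launch-and-wait pattern (the command line and socket path are copied from the trace; the polling loop only approximates the waitforlisten helper from autotest_common.sh, and plain kill stands in for the killprocess helper):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  trap 'kill $nvmfpid; exit 1' SIGINT SIGTERM EXIT
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  until $rpc_py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5   # approximation: poll the RPC socket until the app is listening
  done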
00:09:38.790 [2024-05-16 20:08:25.735904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.790 [2024-05-16 20:08:25.735935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.790 [2024-05-16 20:08:25.735961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.790 [2024-05-16 20:08:25.735963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.790 20:08:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:38.790 20:08:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:09:38.790 20:08:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:39.723 20:08:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:39.981 20:08:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:39.981 20:08:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:39.981 20:08:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:39.981 20:08:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:39.981 20:08:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:40.239 Malloc1 00:09:40.239 20:08:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:40.497 20:08:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:40.755 20:08:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:41.013 [2024-05-16 20:08:28.078361] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:41.013 20:08:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:41.013 20:08:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:41.013 20:08:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:41.271 Malloc2 00:09:41.271 20:08:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:41.529 20:08:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:41.787 20:08:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
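That completes setup_nvmf_vfio_user: the VFIOUSER transport is created, and for each of the two devices the script creates a 64 MiB Malloc bdev, an NVMe-oF subsystem, attaches the bdev as namespace 1, and adds a vfio-user listener rooted under /var/run/vfio-user/domain. A condensed, hedged recap (every rpc.py invocation appears verbatim in the trace; only the loop packaging here is illustrative):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  for i in 1 2; do
      traddr=/var/run/vfio-user/domain/vfio-user$i/$i
      mkdir -p $traddr
      $rpc_py bdev_malloc_create 64 512 -b Malloc$i    # 64 MiB bdev, 512-byte blocks
      $rpc_py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc_py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc_py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a $traddr -s 0
  done

Each listener directory then serves as the traddr that the identify, perf, reconnect, arbitration, hello_world and overhead runs below connect to, e.g. 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'.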
00:09:42.046 20:08:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:42.046 20:08:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:42.046 20:08:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:42.046 20:08:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:42.046 20:08:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:42.046 20:08:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:42.046 [2024-05-16 20:08:29.105787] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:09:42.046 [2024-05-16 20:08:29.105830] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158298 ] 00:09:42.046 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.046 [2024-05-16 20:08:29.141191] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:42.046 [2024-05-16 20:08:29.150026] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:42.046 [2024-05-16 20:08:29.150054] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4f8958a000 00:09:42.046 [2024-05-16 20:08:29.151017] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:42.046 [2024-05-16 20:08:29.152017] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:42.046 [2024-05-16 20:08:29.153019] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:42.046 [2024-05-16 20:08:29.154026] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:42.046 [2024-05-16 20:08:29.155030] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:42.046 [2024-05-16 20:08:29.156034] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:42.046 [2024-05-16 20:08:29.157042] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:42.046 [2024-05-16 20:08:29.158046] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:42.046 [2024-05-16 20:08:29.159055] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:42.046 [2024-05-16 20:08:29.159079] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4f8957f000 00:09:42.046 [2024-05-16 20:08:29.160211] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:42.046 [2024-05-16 20:08:29.176530] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:42.046 [2024-05-16 20:08:29.176569] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:42.046 [2024-05-16 20:08:29.181211] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:42.046 [2024-05-16 20:08:29.181433] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:42.046 [2024-05-16 20:08:29.181531] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:42.046 [2024-05-16 20:08:29.181564] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:42.046 [2024-05-16 20:08:29.181576] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:42.047 [2024-05-16 20:08:29.182202] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:42.047 [2024-05-16 20:08:29.182222] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:42.047 [2024-05-16 20:08:29.182234] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:42.047 [2024-05-16 20:08:29.183199] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:42.047 [2024-05-16 20:08:29.183232] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:42.047 [2024-05-16 20:08:29.183247] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:42.047 [2024-05-16 20:08:29.184210] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:42.047 [2024-05-16 20:08:29.184228] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:42.047 [2024-05-16 20:08:29.185216] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:42.047 [2024-05-16 20:08:29.185236] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:42.047 [2024-05-16 20:08:29.185245] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:42.047 [2024-05-16 20:08:29.185256] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:42.047 
[2024-05-16 20:08:29.185370] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:42.047 [2024-05-16 20:08:29.185379] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:42.047 [2024-05-16 20:08:29.185388] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:42.047 [2024-05-16 20:08:29.186223] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:42.047 [2024-05-16 20:08:29.187231] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:42.047 [2024-05-16 20:08:29.188239] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:42.047 [2024-05-16 20:08:29.189230] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:42.047 [2024-05-16 20:08:29.189327] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:42.047 [2024-05-16 20:08:29.190244] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:42.047 [2024-05-16 20:08:29.190263] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:42.047 [2024-05-16 20:08:29.190272] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:42.047 [2024-05-16 20:08:29.190296] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:42.047 [2024-05-16 20:08:29.190310] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:42.047 [2024-05-16 20:08:29.190342] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:42.047 [2024-05-16 20:08:29.190353] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:42.047 [2024-05-16 20:08:29.190376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:42.047 [2024-05-16 20:08:29.190451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:42.047 [2024-05-16 20:08:29.190473] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:42.047 [2024-05-16 20:08:29.190482] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:42.047 [2024-05-16 20:08:29.190505] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:42.047 [2024-05-16 20:08:29.190512] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:42.047 [2024-05-16 20:08:29.190525] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:42.047 [2024-05-16 20:08:29.190534] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:42.047 [2024-05-16 20:08:29.190542] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:42.047 [2024-05-16 20:08:29.190557] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:42.047 [2024-05-16 20:08:29.190573] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:42.047 [2024-05-16 20:08:29.190593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:42.047 [2024-05-16 20:08:29.190612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:42.047 [2024-05-16 20:08:29.190627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:42.047 [2024-05-16 20:08:29.190649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:42.047 [2024-05-16 20:08:29.190666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:42.047 [2024-05-16 20:08:29.190676] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:42.047 [2024-05-16 20:08:29.190699] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:42.047 [2024-05-16 20:08:29.190716] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:42.047 [2024-05-16 20:08:29.190729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:42.047 [2024-05-16 20:08:29.190741] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:42.047 [2024-05-16 20:08:29.190750] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:42.047 [2024-05-16 20:08:29.190762] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:42.047 [2024-05-16 20:08:29.190773] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:42.047 [2024-05-16 20:08:29.190787] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:42.047 [2024-05-16 
20:08:29.190798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:42.047 [2024-05-16 20:08:29.190867] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:42.047 [2024-05-16 20:08:29.190889] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:42.047 [2024-05-16 20:08:29.190904] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:42.047 [2024-05-16 20:08:29.190913] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:42.047 [2024-05-16 20:08:29.190923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:42.047 [2024-05-16 20:08:29.190940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:42.047 [2024-05-16 20:08:29.190967] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:42.047 [2024-05-16 20:08:29.190984] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:42.047 [2024-05-16 20:08:29.190999] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:42.047 [2024-05-16 20:08:29.191012] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:42.047 [2024-05-16 20:08:29.191024] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:42.047 [2024-05-16 20:08:29.191034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:42.047 [2024-05-16 20:08:29.191063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:42.047 [2024-05-16 20:08:29.191087] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:42.047 [2024-05-16 20:08:29.191103] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:42.047 [2024-05-16 20:08:29.191115] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:42.047 [2024-05-16 20:08:29.191124] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:42.047 [2024-05-16 20:08:29.191134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:42.047 [2024-05-16 20:08:29.191150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:42.047 [2024-05-16 20:08:29.191166] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:42.047 
[2024-05-16 20:08:29.191178] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:09:42.047 [2024-05-16 20:08:29.191194] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:42.047 [2024-05-16 20:08:29.191204] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:42.306 [2024-05-16 20:08:29.191213] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:42.306 [2024-05-16 20:08:29.191222] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:42.307 [2024-05-16 20:08:29.191231] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:42.307 [2024-05-16 20:08:29.191240] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:42.307 [2024-05-16 20:08:29.191272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:42.307 [2024-05-16 20:08:29.191292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:42.307 [2024-05-16 20:08:29.191311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:42.307 [2024-05-16 20:08:29.191324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:42.307 [2024-05-16 20:08:29.191341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:42.307 [2024-05-16 20:08:29.191371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:42.307 [2024-05-16 20:08:29.191389] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:42.307 [2024-05-16 20:08:29.191406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:42.307 [2024-05-16 20:08:29.191443] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:42.307 [2024-05-16 20:08:29.191455] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:42.307 [2024-05-16 20:08:29.191461] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:42.307 [2024-05-16 20:08:29.191468] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:42.307 [2024-05-16 20:08:29.191478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:42.307 [2024-05-16 20:08:29.191490] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:42.307 [2024-05-16 20:08:29.191498] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:42.307 [2024-05-16 20:08:29.191507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:42.307 [2024-05-16 20:08:29.191518] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:42.307 [2024-05-16 20:08:29.191526] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:42.307 [2024-05-16 20:08:29.191535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:42.307 [2024-05-16 20:08:29.191547] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:42.307 [2024-05-16 20:08:29.191556] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:42.307 [2024-05-16 20:08:29.191565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:42.307 [2024-05-16 20:08:29.191576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:42.307 [2024-05-16 20:08:29.191598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:42.307 [2024-05-16 20:08:29.191617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:42.307 [2024-05-16 20:08:29.191633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:42.307 ===================================================== 00:09:42.307 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:42.307 ===================================================== 00:09:42.307 Controller Capabilities/Features 00:09:42.307 ================================ 00:09:42.307 Vendor ID: 4e58 00:09:42.307 Subsystem Vendor ID: 4e58 00:09:42.307 Serial Number: SPDK1 00:09:42.307 Model Number: SPDK bdev Controller 00:09:42.307 Firmware Version: 24.09 00:09:42.307 Recommended Arb Burst: 6 00:09:42.307 IEEE OUI Identifier: 8d 6b 50 00:09:42.307 Multi-path I/O 00:09:42.307 May have multiple subsystem ports: Yes 00:09:42.307 May have multiple controllers: Yes 00:09:42.307 Associated with SR-IOV VF: No 00:09:42.307 Max Data Transfer Size: 131072 00:09:42.307 Max Number of Namespaces: 32 00:09:42.307 Max Number of I/O Queues: 127 00:09:42.307 NVMe Specification Version (VS): 1.3 00:09:42.307 NVMe Specification Version (Identify): 1.3 00:09:42.307 Maximum Queue Entries: 256 00:09:42.307 Contiguous Queues Required: Yes 00:09:42.307 Arbitration Mechanisms Supported 00:09:42.307 Weighted Round Robin: Not Supported 00:09:42.307 Vendor Specific: Not Supported 00:09:42.307 Reset Timeout: 15000 ms 00:09:42.307 Doorbell Stride: 4 bytes 00:09:42.307 NVM Subsystem Reset: Not Supported 00:09:42.307 Command Sets Supported 00:09:42.307 NVM Command Set: Supported 00:09:42.307 Boot Partition: Not Supported 00:09:42.307 Memory Page Size Minimum: 4096 bytes 00:09:42.307 Memory Page Size Maximum: 4096 bytes 00:09:42.307 Persistent Memory Region: Not Supported 00:09:42.307 Optional Asynchronous 
Events Supported 00:09:42.307 Namespace Attribute Notices: Supported 00:09:42.307 Firmware Activation Notices: Not Supported 00:09:42.307 ANA Change Notices: Not Supported 00:09:42.307 PLE Aggregate Log Change Notices: Not Supported 00:09:42.307 LBA Status Info Alert Notices: Not Supported 00:09:42.307 EGE Aggregate Log Change Notices: Not Supported 00:09:42.307 Normal NVM Subsystem Shutdown event: Not Supported 00:09:42.307 Zone Descriptor Change Notices: Not Supported 00:09:42.307 Discovery Log Change Notices: Not Supported 00:09:42.307 Controller Attributes 00:09:42.307 128-bit Host Identifier: Supported 00:09:42.307 Non-Operational Permissive Mode: Not Supported 00:09:42.307 NVM Sets: Not Supported 00:09:42.307 Read Recovery Levels: Not Supported 00:09:42.307 Endurance Groups: Not Supported 00:09:42.307 Predictable Latency Mode: Not Supported 00:09:42.307 Traffic Based Keep ALive: Not Supported 00:09:42.307 Namespace Granularity: Not Supported 00:09:42.307 SQ Associations: Not Supported 00:09:42.307 UUID List: Not Supported 00:09:42.307 Multi-Domain Subsystem: Not Supported 00:09:42.307 Fixed Capacity Management: Not Supported 00:09:42.307 Variable Capacity Management: Not Supported 00:09:42.307 Delete Endurance Group: Not Supported 00:09:42.307 Delete NVM Set: Not Supported 00:09:42.307 Extended LBA Formats Supported: Not Supported 00:09:42.307 Flexible Data Placement Supported: Not Supported 00:09:42.307 00:09:42.307 Controller Memory Buffer Support 00:09:42.307 ================================ 00:09:42.307 Supported: No 00:09:42.307 00:09:42.307 Persistent Memory Region Support 00:09:42.307 ================================ 00:09:42.307 Supported: No 00:09:42.307 00:09:42.307 Admin Command Set Attributes 00:09:42.307 ============================ 00:09:42.307 Security Send/Receive: Not Supported 00:09:42.307 Format NVM: Not Supported 00:09:42.307 Firmware Activate/Download: Not Supported 00:09:42.307 Namespace Management: Not Supported 00:09:42.307 Device Self-Test: Not Supported 00:09:42.307 Directives: Not Supported 00:09:42.307 NVMe-MI: Not Supported 00:09:42.307 Virtualization Management: Not Supported 00:09:42.307 Doorbell Buffer Config: Not Supported 00:09:42.307 Get LBA Status Capability: Not Supported 00:09:42.307 Command & Feature Lockdown Capability: Not Supported 00:09:42.307 Abort Command Limit: 4 00:09:42.307 Async Event Request Limit: 4 00:09:42.307 Number of Firmware Slots: N/A 00:09:42.307 Firmware Slot 1 Read-Only: N/A 00:09:42.307 Firmware Activation Without Reset: N/A 00:09:42.307 Multiple Update Detection Support: N/A 00:09:42.307 Firmware Update Granularity: No Information Provided 00:09:42.307 Per-Namespace SMART Log: No 00:09:42.307 Asymmetric Namespace Access Log Page: Not Supported 00:09:42.307 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:09:42.307 Command Effects Log Page: Supported 00:09:42.307 Get Log Page Extended Data: Supported 00:09:42.307 Telemetry Log Pages: Not Supported 00:09:42.307 Persistent Event Log Pages: Not Supported 00:09:42.307 Supported Log Pages Log Page: May Support 00:09:42.307 Commands Supported & Effects Log Page: Not Supported 00:09:42.307 Feature Identifiers & Effects Log Page:May Support 00:09:42.307 NVMe-MI Commands & Effects Log Page: May Support 00:09:42.307 Data Area 4 for Telemetry Log: Not Supported 00:09:42.307 Error Log Page Entries Supported: 128 00:09:42.307 Keep Alive: Supported 00:09:42.307 Keep Alive Granularity: 10000 ms 00:09:42.307 00:09:42.307 NVM Command Set Attributes 00:09:42.307 ========================== 
00:09:42.307 Submission Queue Entry Size 00:09:42.307 Max: 64 00:09:42.307 Min: 64 00:09:42.307 Completion Queue Entry Size 00:09:42.307 Max: 16 00:09:42.307 Min: 16 00:09:42.307 Number of Namespaces: 32 00:09:42.307 Compare Command: Supported 00:09:42.307 Write Uncorrectable Command: Not Supported 00:09:42.307 Dataset Management Command: Supported 00:09:42.307 Write Zeroes Command: Supported 00:09:42.307 Set Features Save Field: Not Supported 00:09:42.307 Reservations: Not Supported 00:09:42.307 Timestamp: Not Supported 00:09:42.307 Copy: Supported 00:09:42.307 Volatile Write Cache: Present 00:09:42.307 Atomic Write Unit (Normal): 1 00:09:42.307 Atomic Write Unit (PFail): 1 00:09:42.307 Atomic Compare & Write Unit: 1 00:09:42.307 Fused Compare & Write: Supported 00:09:42.307 Scatter-Gather List 00:09:42.307 SGL Command Set: Supported (Dword aligned) 00:09:42.307 SGL Keyed: Not Supported 00:09:42.307 SGL Bit Bucket Descriptor: Not Supported 00:09:42.307 SGL Metadata Pointer: Not Supported 00:09:42.307 Oversized SGL: Not Supported 00:09:42.308 SGL Metadata Address: Not Supported 00:09:42.308 SGL Offset: Not Supported 00:09:42.308 Transport SGL Data Block: Not Supported 00:09:42.308 Replay Protected Memory Block: Not Supported 00:09:42.308 00:09:42.308 Firmware Slot Information 00:09:42.308 ========================= 00:09:42.308 Active slot: 1 00:09:42.308 Slot 1 Firmware Revision: 24.09 00:09:42.308 00:09:42.308 00:09:42.308 Commands Supported and Effects 00:09:42.308 ============================== 00:09:42.308 Admin Commands 00:09:42.308 -------------- 00:09:42.308 Get Log Page (02h): Supported 00:09:42.308 Identify (06h): Supported 00:09:42.308 Abort (08h): Supported 00:09:42.308 Set Features (09h): Supported 00:09:42.308 Get Features (0Ah): Supported 00:09:42.308 Asynchronous Event Request (0Ch): Supported 00:09:42.308 Keep Alive (18h): Supported 00:09:42.308 I/O Commands 00:09:42.308 ------------ 00:09:42.308 Flush (00h): Supported LBA-Change 00:09:42.308 Write (01h): Supported LBA-Change 00:09:42.308 Read (02h): Supported 00:09:42.308 Compare (05h): Supported 00:09:42.308 Write Zeroes (08h): Supported LBA-Change 00:09:42.308 Dataset Management (09h): Supported LBA-Change 00:09:42.308 Copy (19h): Supported LBA-Change 00:09:42.308 Unknown (79h): Supported LBA-Change 00:09:42.308 Unknown (7Ah): Supported 00:09:42.308 00:09:42.308 Error Log 00:09:42.308 ========= 00:09:42.308 00:09:42.308 Arbitration 00:09:42.308 =========== 00:09:42.308 Arbitration Burst: 1 00:09:42.308 00:09:42.308 Power Management 00:09:42.308 ================ 00:09:42.308 Number of Power States: 1 00:09:42.308 Current Power State: Power State #0 00:09:42.308 Power State #0: 00:09:42.308 Max Power: 0.00 W 00:09:42.308 Non-Operational State: Operational 00:09:42.308 Entry Latency: Not Reported 00:09:42.308 Exit Latency: Not Reported 00:09:42.308 Relative Read Throughput: 0 00:09:42.308 Relative Read Latency: 0 00:09:42.308 Relative Write Throughput: 0 00:09:42.308 Relative Write Latency: 0 00:09:42.308 Idle Power: Not Reported 00:09:42.308 Active Power: Not Reported 00:09:42.308 Non-Operational Permissive Mode: Not Supported 00:09:42.308 00:09:42.308 Health Information 00:09:42.308 ================== 00:09:42.308 Critical Warnings: 00:09:42.308 Available Spare Space: OK 00:09:42.308 Temperature: OK 00:09:42.308 Device Reliability: OK 00:09:42.308 Read Only: No 00:09:42.308 Volatile Memory Backup: OK 00:09:42.308 Current Temperature: 0 Kelvin (-2[2024-05-16 20:08:29.191756] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:42.308 [2024-05-16 20:08:29.191773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:42.308 [2024-05-16 20:08:29.191812] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:42.308 [2024-05-16 20:08:29.191839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.308 [2024-05-16 20:08:29.191865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.308 [2024-05-16 20:08:29.191877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.308 [2024-05-16 20:08:29.191889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.308 [2024-05-16 20:08:29.195868] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:42.308 [2024-05-16 20:08:29.195893] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:42.308 [2024-05-16 20:08:29.196280] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:42.308 [2024-05-16 20:08:29.196356] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:42.308 [2024-05-16 20:08:29.196371] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:42.308 [2024-05-16 20:08:29.197289] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:42.308 [2024-05-16 20:08:29.197313] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:42.308 [2024-05-16 20:08:29.197372] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:42.308 [2024-05-16 20:08:29.199326] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:42.308 73 Celsius) 00:09:42.308 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:42.308 Available Spare: 0% 00:09:42.308 Available Spare Threshold: 0% 00:09:42.308 Life Percentage Used: 0% 00:09:42.308 Data Units Read: 0 00:09:42.308 Data Units Written: 0 00:09:42.308 Host Read Commands: 0 00:09:42.308 Host Write Commands: 0 00:09:42.308 Controller Busy Time: 0 minutes 00:09:42.308 Power Cycles: 0 00:09:42.308 Power On Hours: 0 hours 00:09:42.308 Unsafe Shutdowns: 0 00:09:42.308 Unrecoverable Media Errors: 0 00:09:42.308 Lifetime Error Log Entries: 0 00:09:42.308 Warning Temperature Time: 0 minutes 00:09:42.308 Critical Temperature Time: 0 minutes 00:09:42.308 00:09:42.308 Number of Queues 00:09:42.308 ================ 00:09:42.308 Number of I/O Submission Queues: 127 00:09:42.308 Number of I/O Completion Queues: 127 00:09:42.308 00:09:42.308 Active Namespaces 00:09:42.308 ================= 00:09:42.308 Namespace 
ID:1 00:09:42.308 Error Recovery Timeout: Unlimited 00:09:42.308 Command Set Identifier: NVM (00h) 00:09:42.308 Deallocate: Supported 00:09:42.308 Deallocated/Unwritten Error: Not Supported 00:09:42.308 Deallocated Read Value: Unknown 00:09:42.308 Deallocate in Write Zeroes: Not Supported 00:09:42.308 Deallocated Guard Field: 0xFFFF 00:09:42.308 Flush: Supported 00:09:42.308 Reservation: Supported 00:09:42.308 Namespace Sharing Capabilities: Multiple Controllers 00:09:42.308 Size (in LBAs): 131072 (0GiB) 00:09:42.308 Capacity (in LBAs): 131072 (0GiB) 00:09:42.308 Utilization (in LBAs): 131072 (0GiB) 00:09:42.308 NGUID: 0792A8681B35471DBF6555CDE6792FD9 00:09:42.308 UUID: 0792a868-1b35-471d-bf65-55cde6792fd9 00:09:42.308 Thin Provisioning: Not Supported 00:09:42.308 Per-NS Atomic Units: Yes 00:09:42.308 Atomic Boundary Size (Normal): 0 00:09:42.308 Atomic Boundary Size (PFail): 0 00:09:42.308 Atomic Boundary Offset: 0 00:09:42.308 Maximum Single Source Range Length: 65535 00:09:42.308 Maximum Copy Length: 65535 00:09:42.308 Maximum Source Range Count: 1 00:09:42.308 NGUID/EUI64 Never Reused: No 00:09:42.308 Namespace Write Protected: No 00:09:42.308 Number of LBA Formats: 1 00:09:42.308 Current LBA Format: LBA Format #00 00:09:42.308 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:42.308 00:09:42.308 20:08:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:42.308 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.308 [2024-05-16 20:08:29.430716] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:47.574 Initializing NVMe Controllers 00:09:47.574 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:47.574 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:47.574 Initialization complete. Launching workers. 00:09:47.574 ======================================================== 00:09:47.574 Latency(us) 00:09:47.574 Device Information : IOPS MiB/s Average min max 00:09:47.574 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34131.20 133.32 3751.10 1171.62 8250.34 00:09:47.574 ======================================================== 00:09:47.574 Total : 34131.20 133.32 3751.10 1171.62 8250.34 00:09:47.574 00:09:47.574 [2024-05-16 20:08:34.454128] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:47.574 20:08:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:47.574 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.574 [2024-05-16 20:08:34.701386] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:52.838 Initializing NVMe Controllers 00:09:52.838 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:52.838 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:52.838 Initialization complete. Launching workers. 
00:09:52.838 ======================================================== 00:09:52.838 Latency(us) 00:09:52.838 Device Information : IOPS MiB/s Average min max 00:09:52.838 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16057.33 62.72 7976.63 6769.18 8975.67 00:09:52.838 ======================================================== 00:09:52.838 Total : 16057.33 62.72 7976.63 6769.18 8975.67 00:09:52.838 00:09:52.838 [2024-05-16 20:08:39.741240] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:52.838 20:08:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:52.838 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.838 [2024-05-16 20:08:39.946277] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:58.116 [2024-05-16 20:08:45.011181] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:58.116 Initializing NVMe Controllers 00:09:58.116 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:58.116 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:58.116 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:09:58.116 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:09:58.116 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:09:58.116 Initialization complete. Launching workers. 00:09:58.116 Starting thread on core 2 00:09:58.116 Starting thread on core 3 00:09:58.116 Starting thread on core 1 00:09:58.116 20:08:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:09:58.116 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.375 [2024-05-16 20:08:45.321339] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:01.659 [2024-05-16 20:08:48.561236] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:01.659 Initializing NVMe Controllers 00:10:01.659 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:01.659 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:01.659 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:10:01.659 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:10:01.659 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:10:01.659 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:10:01.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:01.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:01.659 Initialization complete. Launching workers. 
00:10:01.659 Starting thread on core 1 with urgent priority queue 00:10:01.659 Starting thread on core 2 with urgent priority queue 00:10:01.659 Starting thread on core 3 with urgent priority queue 00:10:01.659 Starting thread on core 0 with urgent priority queue 00:10:01.659 SPDK bdev Controller (SPDK1 ) core 0: 3442.67 IO/s 29.05 secs/100000 ios 00:10:01.659 SPDK bdev Controller (SPDK1 ) core 1: 3772.33 IO/s 26.51 secs/100000 ios 00:10:01.659 SPDK bdev Controller (SPDK1 ) core 2: 3119.67 IO/s 32.05 secs/100000 ios 00:10:01.659 SPDK bdev Controller (SPDK1 ) core 3: 3291.00 IO/s 30.39 secs/100000 ios 00:10:01.659 ======================================================== 00:10:01.659 00:10:01.659 20:08:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:01.659 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.917 [2024-05-16 20:08:48.865424] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:01.917 Initializing NVMe Controllers 00:10:01.917 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:01.917 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:01.917 Namespace ID: 1 size: 0GB 00:10:01.917 Initialization complete. 00:10:01.917 INFO: using host memory buffer for IO 00:10:01.917 Hello world! 00:10:01.917 [2024-05-16 20:08:48.898998] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:01.917 20:08:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:01.917 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.175 [2024-05-16 20:08:49.192304] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:03.109 Initializing NVMe Controllers 00:10:03.109 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:03.109 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:03.109 Initialization complete. Launching workers. 
00:10:03.109 submit (in ns) avg, min, max = 8061.7, 3480.0, 4016642.2 00:10:03.109 complete (in ns) avg, min, max = 24749.4, 2072.2, 4014756.7 00:10:03.109 00:10:03.109 Submit histogram 00:10:03.109 ================ 00:10:03.109 Range in us Cumulative Count 00:10:03.109 3.461 - 3.484: 0.0074% ( 1) 00:10:03.109 3.484 - 3.508: 0.0297% ( 3) 00:10:03.109 3.508 - 3.532: 1.1152% ( 146) 00:10:03.109 3.532 - 3.556: 2.9368% ( 245) 00:10:03.109 3.556 - 3.579: 8.9814% ( 813) 00:10:03.109 3.579 - 3.603: 15.2491% ( 843) 00:10:03.109 3.603 - 3.627: 23.9777% ( 1174) 00:10:03.109 3.627 - 3.650: 31.2714% ( 981) 00:10:03.109 3.650 - 3.674: 39.3383% ( 1085) 00:10:03.109 3.674 - 3.698: 46.3941% ( 949) 00:10:03.109 3.698 - 3.721: 53.4498% ( 949) 00:10:03.109 3.721 - 3.745: 57.4870% ( 543) 00:10:03.109 3.745 - 3.769: 61.4572% ( 534) 00:10:03.109 3.769 - 3.793: 65.0260% ( 480) 00:10:03.109 3.793 - 3.816: 68.4981% ( 467) 00:10:03.109 3.816 - 3.840: 72.4089% ( 526) 00:10:03.109 3.840 - 3.864: 76.5651% ( 559) 00:10:03.109 3.864 - 3.887: 80.0892% ( 474) 00:10:03.109 3.887 - 3.911: 83.1822% ( 416) 00:10:03.109 3.911 - 3.935: 85.9926% ( 378) 00:10:03.109 3.935 - 3.959: 87.9703% ( 266) 00:10:03.109 3.959 - 3.982: 89.6580% ( 227) 00:10:03.109 3.982 - 4.006: 91.1599% ( 202) 00:10:03.109 4.006 - 4.030: 92.0223% ( 116) 00:10:03.109 4.030 - 4.053: 93.0335% ( 136) 00:10:03.109 4.053 - 4.077: 93.9480% ( 123) 00:10:03.109 4.077 - 4.101: 94.6914% ( 100) 00:10:03.110 4.101 - 4.124: 95.2342% ( 73) 00:10:03.110 4.124 - 4.148: 95.8290% ( 80) 00:10:03.110 4.148 - 4.172: 96.1190% ( 39) 00:10:03.110 4.172 - 4.196: 96.3420% ( 30) 00:10:03.110 4.196 - 4.219: 96.5204% ( 24) 00:10:03.110 4.219 - 4.243: 96.6989% ( 24) 00:10:03.110 4.243 - 4.267: 96.7435% ( 6) 00:10:03.110 4.267 - 4.290: 96.8699% ( 17) 00:10:03.110 4.290 - 4.314: 96.9368% ( 9) 00:10:03.110 4.314 - 4.338: 97.0335% ( 13) 00:10:03.110 4.338 - 4.361: 97.1078% ( 10) 00:10:03.110 4.361 - 4.385: 97.1822% ( 10) 00:10:03.110 4.385 - 4.409: 97.2565% ( 10) 00:10:03.110 4.409 - 4.433: 97.2937% ( 5) 00:10:03.110 4.433 - 4.456: 97.3309% ( 5) 00:10:03.110 4.456 - 4.480: 97.3457% ( 2) 00:10:03.110 4.480 - 4.504: 97.3606% ( 2) 00:10:03.110 4.504 - 4.527: 97.4052% ( 6) 00:10:03.110 4.527 - 4.551: 97.4275% ( 3) 00:10:03.110 4.551 - 4.575: 97.4572% ( 4) 00:10:03.110 4.575 - 4.599: 97.4796% ( 3) 00:10:03.110 4.599 - 4.622: 97.5093% ( 4) 00:10:03.110 4.622 - 4.646: 97.5167% ( 1) 00:10:03.110 4.646 - 4.670: 97.5465% ( 4) 00:10:03.110 4.670 - 4.693: 97.5539% ( 1) 00:10:03.110 4.693 - 4.717: 97.5985% ( 6) 00:10:03.110 4.717 - 4.741: 97.6283% ( 4) 00:10:03.110 4.741 - 4.764: 97.6654% ( 5) 00:10:03.110 4.764 - 4.788: 97.6877% ( 3) 00:10:03.110 4.788 - 4.812: 97.7323% ( 6) 00:10:03.110 4.812 - 4.836: 97.7770% ( 6) 00:10:03.110 4.836 - 4.859: 97.8067% ( 4) 00:10:03.110 4.859 - 4.883: 97.8364% ( 4) 00:10:03.110 4.883 - 4.907: 97.8810% ( 6) 00:10:03.110 4.907 - 4.930: 97.9182% ( 5) 00:10:03.110 4.930 - 4.954: 97.9777% ( 8) 00:10:03.110 4.954 - 4.978: 98.0297% ( 7) 00:10:03.110 4.978 - 5.001: 98.0892% ( 8) 00:10:03.110 5.001 - 5.025: 98.1190% ( 4) 00:10:03.110 5.025 - 5.049: 98.1487% ( 4) 00:10:03.110 5.049 - 5.073: 98.1933% ( 6) 00:10:03.110 5.073 - 5.096: 98.2156% ( 3) 00:10:03.110 5.096 - 5.120: 98.2751% ( 8) 00:10:03.110 5.120 - 5.144: 98.3271% ( 7) 00:10:03.110 5.144 - 5.167: 98.3420% ( 2) 00:10:03.110 5.167 - 5.191: 98.3494% ( 1) 00:10:03.110 5.191 - 5.215: 98.3569% ( 1) 00:10:03.110 5.215 - 5.239: 98.3643% ( 1) 00:10:03.110 5.262 - 5.286: 98.3792% ( 2) 00:10:03.110 5.286 - 5.310: 98.3866% ( 1) 
00:10:03.110 5.310 - 5.333: 98.3941% ( 1) 00:10:03.110 5.357 - 5.381: 98.4015% ( 1) 00:10:03.110 5.381 - 5.404: 98.4089% ( 1) 00:10:03.110 5.404 - 5.428: 98.4164% ( 1) 00:10:03.110 5.452 - 5.476: 98.4312% ( 2) 00:10:03.110 5.476 - 5.499: 98.4387% ( 1) 00:10:03.110 5.523 - 5.547: 98.4461% ( 1) 00:10:03.110 5.570 - 5.594: 98.4535% ( 1) 00:10:03.110 5.594 - 5.618: 98.4610% ( 1) 00:10:03.110 5.665 - 5.689: 98.4684% ( 1) 00:10:03.110 5.736 - 5.760: 98.4758% ( 1) 00:10:03.110 5.831 - 5.855: 98.4833% ( 1) 00:10:03.110 5.997 - 6.021: 98.4907% ( 1) 00:10:03.110 6.163 - 6.210: 98.4981% ( 1) 00:10:03.110 6.305 - 6.353: 98.5130% ( 2) 00:10:03.110 6.353 - 6.400: 98.5279% ( 2) 00:10:03.110 6.447 - 6.495: 98.5353% ( 1) 00:10:03.110 6.590 - 6.637: 98.5428% ( 1) 00:10:03.110 6.779 - 6.827: 98.5502% ( 1) 00:10:03.110 6.874 - 6.921: 98.5576% ( 1) 00:10:03.110 7.016 - 7.064: 98.5651% ( 1) 00:10:03.110 7.111 - 7.159: 98.5725% ( 1) 00:10:03.110 7.253 - 7.301: 98.5874% ( 2) 00:10:03.110 7.301 - 7.348: 98.5948% ( 1) 00:10:03.110 7.348 - 7.396: 98.6097% ( 2) 00:10:03.110 7.443 - 7.490: 98.6245% ( 2) 00:10:03.110 7.538 - 7.585: 98.6320% ( 1) 00:10:03.110 7.633 - 7.680: 98.6468% ( 2) 00:10:03.110 7.775 - 7.822: 98.6766% ( 4) 00:10:03.110 7.822 - 7.870: 98.6840% ( 1) 00:10:03.110 7.917 - 7.964: 98.6989% ( 2) 00:10:03.110 7.964 - 8.012: 98.7063% ( 1) 00:10:03.110 8.059 - 8.107: 98.7435% ( 5) 00:10:03.110 8.107 - 8.154: 98.7509% ( 1) 00:10:03.110 8.296 - 8.344: 98.7584% ( 1) 00:10:03.110 8.533 - 8.581: 98.7732% ( 2) 00:10:03.110 8.581 - 8.628: 98.7807% ( 1) 00:10:03.110 8.628 - 8.676: 98.7881% ( 1) 00:10:03.110 8.818 - 8.865: 98.7955% ( 1) 00:10:03.110 8.913 - 8.960: 98.8104% ( 2) 00:10:03.110 9.055 - 9.102: 98.8178% ( 1) 00:10:03.110 9.102 - 9.150: 98.8327% ( 2) 00:10:03.110 9.150 - 9.197: 98.8401% ( 1) 00:10:03.110 9.197 - 9.244: 98.8476% ( 1) 00:10:03.110 9.481 - 9.529: 98.8550% ( 1) 00:10:03.110 9.908 - 9.956: 98.8699% ( 2) 00:10:03.110 10.003 - 10.050: 98.8848% ( 2) 00:10:03.110 10.050 - 10.098: 98.8996% ( 2) 00:10:03.110 10.098 - 10.145: 98.9145% ( 2) 00:10:03.110 10.714 - 10.761: 98.9219% ( 1) 00:10:03.110 10.809 - 10.856: 98.9294% ( 1) 00:10:03.110 11.046 - 11.093: 98.9368% ( 1) 00:10:03.110 11.093 - 11.141: 98.9442% ( 1) 00:10:03.110 11.283 - 11.330: 98.9517% ( 1) 00:10:03.110 11.330 - 11.378: 98.9591% ( 1) 00:10:03.110 11.378 - 11.425: 98.9665% ( 1) 00:10:03.110 11.425 - 11.473: 98.9740% ( 1) 00:10:03.110 11.757 - 11.804: 98.9888% ( 2) 00:10:03.110 11.804 - 11.852: 99.0112% ( 3) 00:10:03.110 11.852 - 11.899: 99.0186% ( 1) 00:10:03.110 12.041 - 12.089: 99.0260% ( 1) 00:10:03.110 12.231 - 12.326: 99.0335% ( 1) 00:10:03.110 12.516 - 12.610: 99.0409% ( 1) 00:10:03.110 12.610 - 12.705: 99.0483% ( 1) 00:10:03.110 12.705 - 12.800: 99.0558% ( 1) 00:10:03.110 12.800 - 12.895: 99.0632% ( 1) 00:10:03.110 12.895 - 12.990: 99.0706% ( 1) 00:10:03.110 13.084 - 13.179: 99.0781% ( 1) 00:10:03.110 13.179 - 13.274: 99.0855% ( 1) 00:10:03.110 13.274 - 13.369: 99.0929% ( 1) 00:10:03.110 13.369 - 13.464: 99.1004% ( 1) 00:10:03.110 14.127 - 14.222: 99.1078% ( 1) 00:10:03.110 14.507 - 14.601: 99.1227% ( 2) 00:10:03.110 14.696 - 14.791: 99.1375% ( 2) 00:10:03.110 14.791 - 14.886: 99.1450% ( 1) 00:10:03.110 15.076 - 15.170: 99.1524% ( 1) 00:10:03.110 15.644 - 15.739: 99.1599% ( 1) 00:10:03.110 17.067 - 17.161: 99.1747% ( 2) 00:10:03.110 17.351 - 17.446: 99.1896% ( 2) 00:10:03.110 17.446 - 17.541: 99.1970% ( 1) 00:10:03.110 17.541 - 17.636: 99.2193% ( 3) 00:10:03.110 17.636 - 17.730: 99.2416% ( 3) 00:10:03.110 17.730 - 17.825: 99.3086% 
( 9) 00:10:03.110 17.825 - 17.920: 99.3309% ( 3) 00:10:03.110 17.920 - 18.015: 99.3903% ( 8) 00:10:03.110 18.015 - 18.110: 99.4498% ( 8) 00:10:03.110 18.110 - 18.204: 99.4870% ( 5) 00:10:03.110 18.204 - 18.299: 99.5167% ( 4) 00:10:03.110 18.299 - 18.394: 99.5613% ( 6) 00:10:03.110 18.394 - 18.489: 99.6357% ( 10) 00:10:03.110 18.489 - 18.584: 99.6654% ( 4) 00:10:03.110 18.584 - 18.679: 99.6952% ( 4) 00:10:03.110 18.679 - 18.773: 99.7026% ( 1) 00:10:03.110 18.773 - 18.868: 99.7323% ( 4) 00:10:03.110 18.868 - 18.963: 99.7546% ( 3) 00:10:03.110 18.963 - 19.058: 99.7621% ( 1) 00:10:03.110 19.058 - 19.153: 99.7918% ( 4) 00:10:03.110 19.153 - 19.247: 99.8216% ( 4) 00:10:03.110 19.247 - 19.342: 99.8290% ( 1) 00:10:03.110 19.342 - 19.437: 99.8364% ( 1) 00:10:03.110 19.437 - 19.532: 99.8513% ( 2) 00:10:03.110 19.532 - 19.627: 99.8662% ( 2) 00:10:03.110 19.627 - 19.721: 99.8736% ( 1) 00:10:03.110 21.713 - 21.807: 99.8810% ( 1) 00:10:03.110 23.704 - 23.799: 99.8885% ( 1) 00:10:03.110 29.582 - 29.772: 99.8959% ( 1) 00:10:03.110 3859.342 - 3883.615: 99.9033% ( 1) 00:10:03.110 3980.705 - 4004.978: 99.9851% ( 11) 00:10:03.110 4004.978 - 4029.250: 100.0000% ( 2) 00:10:03.110 00:10:03.110 Complete histogram 00:10:03.110 ================== 00:10:03.110 Range in us Cumulative Count 00:10:03.110 2.062 - 2.074: 0.0892% ( 12) 00:10:03.110 2.074 - 2.086: 16.6320% ( 2225) 00:10:03.110 2.086 - 2.098: 34.1041% ( 2350) 00:10:03.110 2.098 - 2.110: 36.3048% ( 296) 00:10:03.110 2.110 - 2.121: 52.1338% ( 2129) 00:10:03.110 2.121 - 2.133: 56.9219% ( 644) 00:10:03.110 2.133 - 2.145: 59.0186% ( 282) 00:10:03.110 2.145 - 2.157: 68.5576% ( 1283) 00:10:03.110 2.157 - 2.169: 72.9591% ( 592) 00:10:03.110 2.169 - 2.181: 74.6320% ( 225) 00:10:03.110 2.181 - 2.193: 80.7509% ( 823) 00:10:03.110 2.193 - 2.204: 82.6468% ( 255) 00:10:03.110 2.204 - 2.216: 83.8959% ( 168) 00:10:03.110 2.216 - 2.228: 88.8253% ( 663) 00:10:03.110 2.228 - 2.240: 90.7807% ( 263) 00:10:03.110 2.240 - 2.252: 91.3978% ( 83) 00:10:03.110 2.252 - 2.264: 93.2565% ( 250) 00:10:03.110 2.264 - 2.276: 94.0743% ( 110) 00:10:03.110 2.276 - 2.287: 94.4312% ( 48) 00:10:03.110 2.287 - 2.299: 95.1822% ( 101) 00:10:03.110 2.299 - 2.311: 95.5019% ( 43) 00:10:03.110 2.311 - 2.323: 95.6208% ( 16) 00:10:03.110 2.323 - 2.335: 95.7323% ( 15) 00:10:03.110 2.335 - 2.347: 95.8141% ( 11) 00:10:03.110 2.347 - 2.359: 95.9108% ( 13) 00:10:03.110 2.359 - 2.370: 96.1264% ( 29) 00:10:03.110 2.370 - 2.382: 96.4610% ( 45) 00:10:03.110 2.382 - 2.394: 96.6840% ( 30) 00:10:03.110 2.394 - 2.406: 96.9665% ( 38) 00:10:03.110 2.406 - 2.418: 97.1747% ( 28) 00:10:03.110 2.418 - 2.430: 97.3457% ( 23) 00:10:03.111 2.430 - 2.441: 97.5390% ( 26) 00:10:03.111 2.441 - 2.453: 97.6729% ( 18) 00:10:03.111 2.453 - 2.465: 97.8216% ( 20) 00:10:03.111 2.465 - 2.477: 97.8959% ( 10) 00:10:03.111 2.477 - 2.489: 97.9257% ( 4) 00:10:03.111 2.489 - 2.501: 97.9777% ( 7) 00:10:03.111 2.501 - 2.513: 98.0595% ( 11) 00:10:03.111 2.513 - 2.524: 98.0669% ( 1) 00:10:03.111 2.524 - 2.536: 98.1413% ( 10) 00:10:03.111 2.536 - 2.548: 98.1933% ( 7) 00:10:03.111 2.548 - 2.560: 98.2082% ( 2) 00:10:03.111 2.560 - 2.572: 98.2230% ( 2) 00:10:03.111 2.572 - 2.584: 98.2454% ( 3) 00:10:03.111 2.584 - 2.596: 98.2751% ( 4) 00:10:03.111 2.596 - 2.607: 98.2974% ( 3) 00:10:03.111 2.607 - 2.619: 98.3048% ( 1) 00:10:03.111 2.619 - 2.631: 98.3346% ( 4) 00:10:03.111 2.631 - 2.643: 98.3420% ( 1) 00:10:03.111 2.655 - 2.667: 98.3643% ( 3) 00:10:03.111 2.690 - 2.702: 98.3866% ( 3) 00:10:03.111 2.702 - 2.714: 98.4015% ( 2) 00:10:03.111 2.797 - 2.809: 
98.4089% ( 1) 00:10:03.111 2.809 - 2.821: 98.4164% ( 1) 00:10:03.111 2.844 - 2.856: 98.4312% ( 2) 00:10:03.111 2.880 - 2.892: 98.4387% ( 1) 00:10:03.111 2.916 - 2.927: 98.4461% ( 1) 00:10:03.111 2.951 - 2.963: 98.4684% ( 3) 00:10:03.111 3.010 - 3.022: 98.4758% ( 1) 00:10:03.111 3.022 - 3.034: 98.4833% ( 1) 00:10:03.111 3.034 - 3.058: 98.4907% ( 1) 00:10:03.111 3.176 - 3.200: 98.4981% ( 1) 00:10:03.111 3.224 - 3.247: 98.5056% ( 1) 00:10:03.111 3.247 - 3.271: 98.5204% ( 2) 00:10:03.111 3.342 - 3.366: 98.5428% ( 3) 00:10:03.111 [2024-05-16 20:08:50.216387] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:03.369 3.366 - 3.390: 98.5502% ( 1) 00:10:03.369 3.390 - 3.413: 98.5576% ( 1) 00:10:03.369 3.413 - 3.437: 98.5725% ( 2) 00:10:03.369 3.437 - 3.461: 98.5799% ( 1) 00:10:03.369 3.484 - 3.508: 98.6022% ( 3) 00:10:03.369 3.508 - 3.532: 98.6097% ( 1) 00:10:03.369 3.532 - 3.556: 98.6245% ( 2) 00:10:03.369 3.579 - 3.603: 98.6320% ( 1) 00:10:03.369 3.603 - 3.627: 98.6394% ( 1) 00:10:03.369 3.650 - 3.674: 98.6468% ( 1) 00:10:03.369 3.674 - 3.698: 98.6543% ( 1) 00:10:03.369 3.698 - 3.721: 98.6691% ( 2) 00:10:03.369 3.769 - 3.793: 98.6914% ( 3) 00:10:03.369 3.793 - 3.816: 98.6989% ( 1) 00:10:03.369 3.816 - 3.840: 98.7212% ( 3) 00:10:03.369 3.864 - 3.887: 98.7361% ( 2) 00:10:03.369 3.887 - 3.911: 98.7435% ( 1) 00:10:03.369 4.077 - 4.101: 98.7509% ( 1) 00:10:03.369 4.693 - 4.717: 98.7584% ( 1) 00:10:03.369 5.689 - 5.713: 98.7658% ( 1) 00:10:03.369 5.784 - 5.807: 98.7732% ( 1) 00:10:03.369 5.879 - 5.902: 98.7807% ( 1) 00:10:03.369 5.926 - 5.950: 98.7881% ( 1) 00:10:03.369 6.116 - 6.163: 98.7955% ( 1) 00:10:03.369 6.163 - 6.210: 98.8030% ( 1) 00:10:03.369 6.779 - 6.827: 98.8253% ( 3) 00:10:03.369 6.874 - 6.921: 98.8327% ( 1) 00:10:03.369 6.921 - 6.969: 98.8550% ( 3) 00:10:03.369 7.111 - 7.159: 98.8625% ( 1) 00:10:03.369 7.301 - 7.348: 98.8699% ( 1) 00:10:03.369 7.348 - 7.396: 98.8773% ( 1) 00:10:03.369 7.538 - 7.585: 98.8848% ( 1) 00:10:03.369 7.727 - 7.775: 98.8922% ( 1) 00:10:03.369 8.059 - 8.107: 98.8996% ( 1) 00:10:03.369 8.154 - 8.201: 98.9071% ( 1) 00:10:03.369 8.770 - 8.818: 98.9145% ( 1) 00:10:03.369 9.339 - 9.387: 98.9219% ( 1) 00:10:03.369 9.671 - 9.719: 98.9294% ( 1) 00:10:03.369 12.136 - 12.231: 98.9368% ( 1) 00:10:03.369 15.550 - 15.644: 98.9442% ( 1) 00:10:03.369 15.644 - 15.739: 98.9517% ( 1) 00:10:03.369 15.739 - 15.834: 98.9591% ( 1) 00:10:03.369 15.834 - 15.929: 98.9888% ( 4) 00:10:03.369 15.929 - 16.024: 99.0112% ( 3) 00:10:03.369 16.024 - 16.119: 99.0335% ( 3) 00:10:03.369 16.119 - 16.213: 99.0632% ( 4) 00:10:03.369 16.213 - 16.308: 99.1078% ( 6) 00:10:03.369 16.308 - 16.403: 99.1301% ( 3) 00:10:03.369 16.498 - 16.593: 99.1450% ( 2) 00:10:03.369 16.593 - 16.687: 99.1822% ( 5) 00:10:03.369 16.687 - 16.782: 99.2193% ( 5) 00:10:03.369 16.782 - 16.877: 99.2937% ( 10) 00:10:03.369 16.877 - 16.972: 99.3160% ( 3) 00:10:03.369 16.972 - 17.067: 99.3457% ( 4) 00:10:03.369 17.067 - 17.161: 99.3606% ( 2) 00:10:03.369 17.161 - 17.256: 99.3829% ( 3) 00:10:03.369 17.636 - 17.730: 99.3903% ( 1) 00:10:03.369 17.730 - 17.825: 99.3978% ( 1) 00:10:03.369 17.825 - 17.920: 99.4052% ( 1) 00:10:03.369 18.110 - 18.204: 99.4201% ( 2) 00:10:03.369 18.679 - 18.773: 99.4275% ( 1) 00:10:03.369 20.101 - 20.196: 99.4349% ( 1) 00:10:03.369 3155.437 - 3179.710: 99.4424% ( 1) 00:10:03.369 3810.797 - 3835.070: 99.4498% ( 1) 00:10:03.369 3980.705 - 4004.978: 99.9480% ( 67) 00:10:03.369 4004.978 - 4029.250: 100.0000% ( 7) 00:10:03.369 00:10:03.369 20:08:50
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:10:03.369 20:08:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:03.369 20:08:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:10:03.369 20:08:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:10:03.369 20:08:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:03.627 [ 00:10:03.627 { 00:10:03.627 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:03.627 "subtype": "Discovery", 00:10:03.627 "listen_addresses": [], 00:10:03.627 "allow_any_host": true, 00:10:03.627 "hosts": [] 00:10:03.627 }, 00:10:03.627 { 00:10:03.627 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:03.627 "subtype": "NVMe", 00:10:03.627 "listen_addresses": [ 00:10:03.627 { 00:10:03.627 "trtype": "VFIOUSER", 00:10:03.627 "adrfam": "IPv4", 00:10:03.627 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:03.627 "trsvcid": "0" 00:10:03.627 } 00:10:03.627 ], 00:10:03.627 "allow_any_host": true, 00:10:03.627 "hosts": [], 00:10:03.627 "serial_number": "SPDK1", 00:10:03.627 "model_number": "SPDK bdev Controller", 00:10:03.627 "max_namespaces": 32, 00:10:03.627 "min_cntlid": 1, 00:10:03.627 "max_cntlid": 65519, 00:10:03.627 "namespaces": [ 00:10:03.627 { 00:10:03.627 "nsid": 1, 00:10:03.627 "bdev_name": "Malloc1", 00:10:03.627 "name": "Malloc1", 00:10:03.627 "nguid": "0792A8681B35471DBF6555CDE6792FD9", 00:10:03.627 "uuid": "0792a868-1b35-471d-bf65-55cde6792fd9" 00:10:03.627 } 00:10:03.627 ] 00:10:03.627 }, 00:10:03.627 { 00:10:03.627 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:03.627 "subtype": "NVMe", 00:10:03.627 "listen_addresses": [ 00:10:03.627 { 00:10:03.627 "trtype": "VFIOUSER", 00:10:03.627 "adrfam": "IPv4", 00:10:03.627 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:03.627 "trsvcid": "0" 00:10:03.627 } 00:10:03.627 ], 00:10:03.627 "allow_any_host": true, 00:10:03.627 "hosts": [], 00:10:03.628 "serial_number": "SPDK2", 00:10:03.628 "model_number": "SPDK bdev Controller", 00:10:03.628 "max_namespaces": 32, 00:10:03.628 "min_cntlid": 1, 00:10:03.628 "max_cntlid": 65519, 00:10:03.628 "namespaces": [ 00:10:03.628 { 00:10:03.628 "nsid": 1, 00:10:03.628 "bdev_name": "Malloc2", 00:10:03.628 "name": "Malloc2", 00:10:03.628 "nguid": "22F3B60170614A99A76EFA5F80E31564", 00:10:03.628 "uuid": "22f3b601-7061-4a99-a76e-fa5f80e31564" 00:10:03.628 } 00:10:03.628 ] 00:10:03.628 } 00:10:03.628 ] 00:10:03.628 20:08:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:03.628 20:08:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=160822 00:10:03.628 20:08:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:03.628 20:08:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:03.628 20:08:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:10:03.628 20:08:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:03.628 20:08:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:10:03.628 20:08:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=1 00:10:03.628 20:08:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:10:03.628 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.628 20:08:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:03.628 20:08:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:10:03.628 20:08:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=2 00:10:03.628 20:08:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:10:03.628 [2024-05-16 20:08:50.713424] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:03.628 20:08:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:03.628 20:08:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:03.628 20:08:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:10:03.628 20:08:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:03.628 20:08:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:04.194 Malloc3 00:10:04.194 20:08:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:04.194 [2024-05-16 20:08:51.320707] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:04.194 20:08:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:04.453 Asynchronous Event Request test 00:10:04.453 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:04.453 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:04.453 Registering asynchronous event callbacks... 00:10:04.453 Starting namespace attribute notice tests for all controllers... 00:10:04.453 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:04.453 aer_cb - Changed Namespace 00:10:04.453 Cleaning up... 
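The namespace-change AER exercised above can be reproduced by hand with the same RPCs the test script drives: create the Malloc3 bdev, attach it to cnode1 as namespace 2, and re-list the subsystems while the aer example is still attached. A minimal sketch, assuming the SPDK tree sits at $SPDK_DIR and the vfio-user target from this run is still listening on /var/run/vfio-user/domain/vfio-user1/1:

  # create the malloc bdev that backs the new namespace (same arguments as the script above)
  $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  # attach it to cnode1 as nsid 2; the attached host should then log an AER for log page 4 (Changed Namespace List)
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  # confirm the subsystem now reports both Malloc1 (nsid 1) and Malloc3 (nsid 2)
  $SPDK_DIR/scripts/rpc.py nvmf_get_subsystems

The listing that follows is the script's own nvmf_get_subsystems output captured after the notice fired.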
00:10:04.453 [ 00:10:04.453 { 00:10:04.453 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:04.453 "subtype": "Discovery", 00:10:04.453 "listen_addresses": [], 00:10:04.453 "allow_any_host": true, 00:10:04.453 "hosts": [] 00:10:04.453 }, 00:10:04.453 { 00:10:04.453 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:04.453 "subtype": "NVMe", 00:10:04.453 "listen_addresses": [ 00:10:04.453 { 00:10:04.453 "trtype": "VFIOUSER", 00:10:04.453 "adrfam": "IPv4", 00:10:04.453 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:04.453 "trsvcid": "0" 00:10:04.453 } 00:10:04.453 ], 00:10:04.453 "allow_any_host": true, 00:10:04.453 "hosts": [], 00:10:04.453 "serial_number": "SPDK1", 00:10:04.453 "model_number": "SPDK bdev Controller", 00:10:04.453 "max_namespaces": 32, 00:10:04.453 "min_cntlid": 1, 00:10:04.453 "max_cntlid": 65519, 00:10:04.453 "namespaces": [ 00:10:04.453 { 00:10:04.453 "nsid": 1, 00:10:04.453 "bdev_name": "Malloc1", 00:10:04.453 "name": "Malloc1", 00:10:04.453 "nguid": "0792A8681B35471DBF6555CDE6792FD9", 00:10:04.453 "uuid": "0792a868-1b35-471d-bf65-55cde6792fd9" 00:10:04.453 }, 00:10:04.453 { 00:10:04.453 "nsid": 2, 00:10:04.453 "bdev_name": "Malloc3", 00:10:04.453 "name": "Malloc3", 00:10:04.453 "nguid": "72C1E492839849508A2C1F1FA97C9611", 00:10:04.453 "uuid": "72c1e492-8398-4950-8a2c-1f1fa97c9611" 00:10:04.453 } 00:10:04.453 ] 00:10:04.453 }, 00:10:04.453 { 00:10:04.453 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:04.453 "subtype": "NVMe", 00:10:04.453 "listen_addresses": [ 00:10:04.453 { 00:10:04.453 "trtype": "VFIOUSER", 00:10:04.453 "adrfam": "IPv4", 00:10:04.453 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:04.453 "trsvcid": "0" 00:10:04.453 } 00:10:04.453 ], 00:10:04.453 "allow_any_host": true, 00:10:04.453 "hosts": [], 00:10:04.453 "serial_number": "SPDK2", 00:10:04.453 "model_number": "SPDK bdev Controller", 00:10:04.453 "max_namespaces": 32, 00:10:04.453 "min_cntlid": 1, 00:10:04.453 "max_cntlid": 65519, 00:10:04.453 "namespaces": [ 00:10:04.453 { 00:10:04.453 "nsid": 1, 00:10:04.453 "bdev_name": "Malloc2", 00:10:04.453 "name": "Malloc2", 00:10:04.453 "nguid": "22F3B60170614A99A76EFA5F80E31564", 00:10:04.453 "uuid": "22f3b601-7061-4a99-a76e-fa5f80e31564" 00:10:04.453 } 00:10:04.453 ] 00:10:04.453 } 00:10:04.453 ] 00:10:04.453 20:08:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 160822 00:10:04.453 20:08:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:04.453 20:08:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:04.453 20:08:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:04.453 20:08:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:04.453 [2024-05-16 20:08:51.594141] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
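The controller report that follows comes from spdk_nvme_identify pointed at the second vfio-user endpoint; the -L options requested on the command line above are what produce the nvme/nvme_vfio/vfio_pci debug traces interleaved with the report. A rough stand-alone invocation, assuming the same target is still up and $SPDK_DIR points at the build used in this run:

  # identify cnode2 over the vfio-user transport; drop the -L flags for a quieter report
  $SPDK_DIR/build/bin/spdk_nvme_identify -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
      -L nvme -L nvme_vfio -L vfio_pci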
00:10:04.453 [2024-05-16 20:08:51.594197] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160961 ] 00:10:04.713 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.713 [2024-05-16 20:08:51.626917] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:04.713 [2024-05-16 20:08:51.632272] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:04.713 [2024-05-16 20:08:51.632301] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2fc5460000 00:10:04.713 [2024-05-16 20:08:51.633269] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:04.713 [2024-05-16 20:08:51.634278] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:04.713 [2024-05-16 20:08:51.635281] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:04.713 [2024-05-16 20:08:51.636285] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:04.713 [2024-05-16 20:08:51.637291] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:04.713 [2024-05-16 20:08:51.638298] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:04.713 [2024-05-16 20:08:51.639311] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:04.713 [2024-05-16 20:08:51.640320] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:04.713 [2024-05-16 20:08:51.641328] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:04.713 [2024-05-16 20:08:51.641353] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2fc5455000 00:10:04.713 [2024-05-16 20:08:51.642466] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:04.713 [2024-05-16 20:08:51.657098] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:04.713 [2024-05-16 20:08:51.657133] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:04.713 [2024-05-16 20:08:51.659234] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:04.713 [2024-05-16 20:08:51.659290] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:04.713 [2024-05-16 20:08:51.659382] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:10:04.713 [2024-05-16 20:08:51.659413] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:04.713 [2024-05-16 20:08:51.659424] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:04.713 [2024-05-16 20:08:51.660241] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:04.713 [2024-05-16 20:08:51.660262] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:04.713 [2024-05-16 20:08:51.660275] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:04.713 [2024-05-16 20:08:51.661244] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:04.713 [2024-05-16 20:08:51.661265] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:04.713 [2024-05-16 20:08:51.661279] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:04.713 [2024-05-16 20:08:51.662256] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:04.713 [2024-05-16 20:08:51.662276] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:04.713 [2024-05-16 20:08:51.663256] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:04.713 [2024-05-16 20:08:51.663276] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:04.713 [2024-05-16 20:08:51.663285] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:04.713 [2024-05-16 20:08:51.663297] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:04.713 [2024-05-16 20:08:51.663407] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:04.713 [2024-05-16 20:08:51.663415] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:04.713 [2024-05-16 20:08:51.663423] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:04.713 [2024-05-16 20:08:51.664265] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:04.713 [2024-05-16 20:08:51.665272] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:04.713 [2024-05-16 20:08:51.667883] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:04.713 [2024-05-16 20:08:51.668282] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:04.713 [2024-05-16 20:08:51.668346] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:04.713 [2024-05-16 20:08:51.669313] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:04.713 [2024-05-16 20:08:51.669333] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:04.713 [2024-05-16 20:08:51.669342] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:04.713 [2024-05-16 20:08:51.669369] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:04.713 [2024-05-16 20:08:51.669382] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:04.713 [2024-05-16 20:08:51.669406] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:04.713 [2024-05-16 20:08:51.669415] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:04.713 [2024-05-16 20:08:51.669435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:04.713 [2024-05-16 20:08:51.678884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:04.713 [2024-05-16 20:08:51.678909] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:04.713 [2024-05-16 20:08:51.678919] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:04.713 [2024-05-16 20:08:51.678928] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:04.713 [2024-05-16 20:08:51.678936] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:04.713 [2024-05-16 20:08:51.678948] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:04.713 [2024-05-16 20:08:51.678958] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:04.713 [2024-05-16 20:08:51.678966] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:04.713 [2024-05-16 20:08:51.678980] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:04.713 [2024-05-16 20:08:51.678997] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:04.713 [2024-05-16 20:08:51.686867] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:04.713 [2024-05-16 20:08:51.686891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.713 [2024-05-16 20:08:51.686904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.713 [2024-05-16 20:08:51.686916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.713 [2024-05-16 20:08:51.686927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.713 [2024-05-16 20:08:51.686936] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:04.714 [2024-05-16 20:08:51.686952] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:04.714 [2024-05-16 20:08:51.686967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:04.714 [2024-05-16 20:08:51.694866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:04.714 [2024-05-16 20:08:51.694884] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:04.714 [2024-05-16 20:08:51.694898] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:04.714 [2024-05-16 20:08:51.694910] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:04.714 [2024-05-16 20:08:51.694920] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:04.714 [2024-05-16 20:08:51.694934] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:04.714 [2024-05-16 20:08:51.702861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:04.714 [2024-05-16 20:08:51.702924] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:04.714 [2024-05-16 20:08:51.702941] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:04.714 [2024-05-16 20:08:51.702955] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:04.714 [2024-05-16 20:08:51.702963] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:04.714 [2024-05-16 20:08:51.702973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:04.714 
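The DEBUG lines in this stretch trace the driver's controller-initialization state machine for /var/run/vfio-user/domain/vfio-user2/2: connect the admin queue, read VS and CAP, clear and then set CC.EN, identify the controller, configure AER, set the keep-alive timeout and queue count, and finally walk the active namespaces. One way to skim that sequence from a saved copy of this console output (build.log is a placeholder file name):

  # list the "setting state to ..." transitions in order, collapsing consecutive repeats
  grep -o 'setting state to [^(]*' build.log | uniq -c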
[2024-05-16 20:08:51.710863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:04.714 [2024-05-16 20:08:51.710888] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:04.714 [2024-05-16 20:08:51.710908] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:04.714 [2024-05-16 20:08:51.710924] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:04.714 [2024-05-16 20:08:51.710937] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:04.714 [2024-05-16 20:08:51.710946] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:04.714 [2024-05-16 20:08:51.710955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:04.714 [2024-05-16 20:08:51.718877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:04.714 [2024-05-16 20:08:51.718907] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:04.714 [2024-05-16 20:08:51.718924] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:04.714 [2024-05-16 20:08:51.718938] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:04.714 [2024-05-16 20:08:51.718946] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:04.714 [2024-05-16 20:08:51.718956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:04.714 [2024-05-16 20:08:51.726865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:04.714 [2024-05-16 20:08:51.726886] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:04.714 [2024-05-16 20:08:51.726898] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:04.714 [2024-05-16 20:08:51.726919] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:04.714 [2024-05-16 20:08:51.726930] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:04.714 [2024-05-16 20:08:51.726939] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:04.714 [2024-05-16 20:08:51.726949] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:10:04.714 [2024-05-16 20:08:51.726956] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:04.714 [2024-05-16 20:08:51.726965] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:04.714 [2024-05-16 20:08:51.726994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:04.714 [2024-05-16 20:08:51.734867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:04.714 [2024-05-16 20:08:51.734892] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:04.714 [2024-05-16 20:08:51.742865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:04.714 [2024-05-16 20:08:51.742889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:04.714 [2024-05-16 20:08:51.750861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:04.714 [2024-05-16 20:08:51.750886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:04.714 [2024-05-16 20:08:51.758865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:04.714 [2024-05-16 20:08:51.758891] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:04.714 [2024-05-16 20:08:51.758901] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:04.714 [2024-05-16 20:08:51.758907] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:04.714 [2024-05-16 20:08:51.758913] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:04.714 [2024-05-16 20:08:51.758923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:04.714 [2024-05-16 20:08:51.758934] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:04.714 [2024-05-16 20:08:51.758943] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:04.714 [2024-05-16 20:08:51.758951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:04.714 [2024-05-16 20:08:51.758962] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:04.714 [2024-05-16 20:08:51.758970] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:04.714 [2024-05-16 20:08:51.758979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:04.714 [2024-05-16 20:08:51.758990] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:04.714 [2024-05-16 20:08:51.759002] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:04.714 [2024-05-16 20:08:51.759012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:04.714 [2024-05-16 20:08:51.766864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:04.714 [2024-05-16 20:08:51.766892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:04.714 [2024-05-16 20:08:51.766909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:04.714 [2024-05-16 20:08:51.766924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:04.714 ===================================================== 00:10:04.714 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:04.714 ===================================================== 00:10:04.714 Controller Capabilities/Features 00:10:04.714 ================================ 00:10:04.714 Vendor ID: 4e58 00:10:04.714 Subsystem Vendor ID: 4e58 00:10:04.714 Serial Number: SPDK2 00:10:04.714 Model Number: SPDK bdev Controller 00:10:04.714 Firmware Version: 24.09 00:10:04.714 Recommended Arb Burst: 6 00:10:04.714 IEEE OUI Identifier: 8d 6b 50 00:10:04.714 Multi-path I/O 00:10:04.714 May have multiple subsystem ports: Yes 00:10:04.714 May have multiple controllers: Yes 00:10:04.714 Associated with SR-IOV VF: No 00:10:04.714 Max Data Transfer Size: 131072 00:10:04.714 Max Number of Namespaces: 32 00:10:04.714 Max Number of I/O Queues: 127 00:10:04.714 NVMe Specification Version (VS): 1.3 00:10:04.714 NVMe Specification Version (Identify): 1.3 00:10:04.714 Maximum Queue Entries: 256 00:10:04.714 Contiguous Queues Required: Yes 00:10:04.714 Arbitration Mechanisms Supported 00:10:04.714 Weighted Round Robin: Not Supported 00:10:04.714 Vendor Specific: Not Supported 00:10:04.714 Reset Timeout: 15000 ms 00:10:04.714 Doorbell Stride: 4 bytes 00:10:04.714 NVM Subsystem Reset: Not Supported 00:10:04.714 Command Sets Supported 00:10:04.714 NVM Command Set: Supported 00:10:04.714 Boot Partition: Not Supported 00:10:04.714 Memory Page Size Minimum: 4096 bytes 00:10:04.714 Memory Page Size Maximum: 4096 bytes 00:10:04.715 Persistent Memory Region: Not Supported 00:10:04.715 Optional Asynchronous Events Supported 00:10:04.715 Namespace Attribute Notices: Supported 00:10:04.715 Firmware Activation Notices: Not Supported 00:10:04.715 ANA Change Notices: Not Supported 00:10:04.715 PLE Aggregate Log Change Notices: Not Supported 00:10:04.715 LBA Status Info Alert Notices: Not Supported 00:10:04.715 EGE Aggregate Log Change Notices: Not Supported 00:10:04.715 Normal NVM Subsystem Shutdown event: Not Supported 00:10:04.715 Zone Descriptor Change Notices: Not Supported 00:10:04.715 Discovery Log Change Notices: Not Supported 00:10:04.715 Controller Attributes 00:10:04.715 128-bit Host Identifier: Supported 00:10:04.715 Non-Operational Permissive Mode: Not Supported 00:10:04.715 NVM Sets: Not Supported 00:10:04.715 Read Recovery Levels: Not Supported 00:10:04.715 Endurance Groups: Not Supported 00:10:04.715 Predictable Latency Mode: Not Supported 00:10:04.715 Traffic Based Keep ALive: Not Supported 00:10:04.715 Namespace Granularity: Not Supported 
00:10:04.715 SQ Associations: Not Supported 00:10:04.715 UUID List: Not Supported 00:10:04.715 Multi-Domain Subsystem: Not Supported 00:10:04.715 Fixed Capacity Management: Not Supported 00:10:04.715 Variable Capacity Management: Not Supported 00:10:04.715 Delete Endurance Group: Not Supported 00:10:04.715 Delete NVM Set: Not Supported 00:10:04.715 Extended LBA Formats Supported: Not Supported 00:10:04.715 Flexible Data Placement Supported: Not Supported 00:10:04.715 00:10:04.715 Controller Memory Buffer Support 00:10:04.715 ================================ 00:10:04.715 Supported: No 00:10:04.715 00:10:04.715 Persistent Memory Region Support 00:10:04.715 ================================ 00:10:04.715 Supported: No 00:10:04.715 00:10:04.715 Admin Command Set Attributes 00:10:04.715 ============================ 00:10:04.715 Security Send/Receive: Not Supported 00:10:04.715 Format NVM: Not Supported 00:10:04.715 Firmware Activate/Download: Not Supported 00:10:04.715 Namespace Management: Not Supported 00:10:04.715 Device Self-Test: Not Supported 00:10:04.715 Directives: Not Supported 00:10:04.715 NVMe-MI: Not Supported 00:10:04.715 Virtualization Management: Not Supported 00:10:04.715 Doorbell Buffer Config: Not Supported 00:10:04.715 Get LBA Status Capability: Not Supported 00:10:04.715 Command & Feature Lockdown Capability: Not Supported 00:10:04.715 Abort Command Limit: 4 00:10:04.715 Async Event Request Limit: 4 00:10:04.715 Number of Firmware Slots: N/A 00:10:04.715 Firmware Slot 1 Read-Only: N/A 00:10:04.715 Firmware Activation Without Reset: N/A 00:10:04.715 Multiple Update Detection Support: N/A 00:10:04.715 Firmware Update Granularity: No Information Provided 00:10:04.715 Per-Namespace SMART Log: No 00:10:04.715 Asymmetric Namespace Access Log Page: Not Supported 00:10:04.715 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:04.715 Command Effects Log Page: Supported 00:10:04.715 Get Log Page Extended Data: Supported 00:10:04.715 Telemetry Log Pages: Not Supported 00:10:04.715 Persistent Event Log Pages: Not Supported 00:10:04.715 Supported Log Pages Log Page: May Support 00:10:04.715 Commands Supported & Effects Log Page: Not Supported 00:10:04.715 Feature Identifiers & Effects Log Page:May Support 00:10:04.715 NVMe-MI Commands & Effects Log Page: May Support 00:10:04.715 Data Area 4 for Telemetry Log: Not Supported 00:10:04.715 Error Log Page Entries Supported: 128 00:10:04.715 Keep Alive: Supported 00:10:04.715 Keep Alive Granularity: 10000 ms 00:10:04.715 00:10:04.715 NVM Command Set Attributes 00:10:04.715 ========================== 00:10:04.715 Submission Queue Entry Size 00:10:04.715 Max: 64 00:10:04.715 Min: 64 00:10:04.715 Completion Queue Entry Size 00:10:04.715 Max: 16 00:10:04.715 Min: 16 00:10:04.715 Number of Namespaces: 32 00:10:04.715 Compare Command: Supported 00:10:04.715 Write Uncorrectable Command: Not Supported 00:10:04.715 Dataset Management Command: Supported 00:10:04.715 Write Zeroes Command: Supported 00:10:04.715 Set Features Save Field: Not Supported 00:10:04.715 Reservations: Not Supported 00:10:04.715 Timestamp: Not Supported 00:10:04.715 Copy: Supported 00:10:04.715 Volatile Write Cache: Present 00:10:04.715 Atomic Write Unit (Normal): 1 00:10:04.715 Atomic Write Unit (PFail): 1 00:10:04.715 Atomic Compare & Write Unit: 1 00:10:04.715 Fused Compare & Write: Supported 00:10:04.715 Scatter-Gather List 00:10:04.715 SGL Command Set: Supported (Dword aligned) 00:10:04.715 SGL Keyed: Not Supported 00:10:04.715 SGL Bit Bucket Descriptor: Not Supported 00:10:04.715 
SGL Metadata Pointer: Not Supported 00:10:04.715 Oversized SGL: Not Supported 00:10:04.715 SGL Metadata Address: Not Supported 00:10:04.715 SGL Offset: Not Supported 00:10:04.715 Transport SGL Data Block: Not Supported 00:10:04.715 Replay Protected Memory Block: Not Supported 00:10:04.715 00:10:04.715 Firmware Slot Information 00:10:04.715 ========================= 00:10:04.715 Active slot: 1 00:10:04.715 Slot 1 Firmware Revision: 24.09 00:10:04.715 00:10:04.715 00:10:04.715 Commands Supported and Effects 00:10:04.715 ============================== 00:10:04.715 Admin Commands 00:10:04.715 -------------- 00:10:04.715 Get Log Page (02h): Supported 00:10:04.715 Identify (06h): Supported 00:10:04.715 Abort (08h): Supported 00:10:04.715 Set Features (09h): Supported 00:10:04.715 Get Features (0Ah): Supported 00:10:04.715 Asynchronous Event Request (0Ch): Supported 00:10:04.715 Keep Alive (18h): Supported 00:10:04.715 I/O Commands 00:10:04.715 ------------ 00:10:04.715 Flush (00h): Supported LBA-Change 00:10:04.715 Write (01h): Supported LBA-Change 00:10:04.715 Read (02h): Supported 00:10:04.715 Compare (05h): Supported 00:10:04.715 Write Zeroes (08h): Supported LBA-Change 00:10:04.715 Dataset Management (09h): Supported LBA-Change 00:10:04.715 Copy (19h): Supported LBA-Change 00:10:04.715 Unknown (79h): Supported LBA-Change 00:10:04.715 Unknown (7Ah): Supported 00:10:04.715 00:10:04.715 Error Log 00:10:04.715 ========= 00:10:04.715 00:10:04.715 Arbitration 00:10:04.715 =========== 00:10:04.715 Arbitration Burst: 1 00:10:04.715 00:10:04.715 Power Management 00:10:04.715 ================ 00:10:04.715 Number of Power States: 1 00:10:04.715 Current Power State: Power State #0 00:10:04.715 Power State #0: 00:10:04.715 Max Power: 0.00 W 00:10:04.715 Non-Operational State: Operational 00:10:04.715 Entry Latency: Not Reported 00:10:04.715 Exit Latency: Not Reported 00:10:04.715 Relative Read Throughput: 0 00:10:04.715 Relative Read Latency: 0 00:10:04.715 Relative Write Throughput: 0 00:10:04.715 Relative Write Latency: 0 00:10:04.715 Idle Power: Not Reported 00:10:04.715 Active Power: Not Reported 00:10:04.715 Non-Operational Permissive Mode: Not Supported 00:10:04.715 00:10:04.715 Health Information 00:10:04.715 ================== 00:10:04.715 Critical Warnings: 00:10:04.715 Available Spare Space: OK 00:10:04.715 Temperature: OK 00:10:04.715 Device Reliability: OK 00:10:04.715 Read Only: No 00:10:04.715 Volatile Memory Backup: OK 00:10:04.715 Current Temperature: 0 Kelvin (-2[2024-05-16 20:08:51.767041] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:04.715 [2024-05-16 20:08:51.774864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:04.715 [2024-05-16 20:08:51.774909] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:10:04.715 [2024-05-16 20:08:51.774926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.715 [2024-05-16 20:08:51.774938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.715 [2024-05-16 20:08:51.774948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.715 [2024-05-16 20:08:51.774957] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.715 [2024-05-16 20:08:51.775023] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:04.715 [2024-05-16 20:08:51.775044] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:10:04.716 [2024-05-16 20:08:51.776034] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:04.716 [2024-05-16 20:08:51.776106] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:10:04.716 [2024-05-16 20:08:51.776121] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:10:04.716 [2024-05-16 20:08:51.777035] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:10:04.716 [2024-05-16 20:08:51.777060] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:10:04.716 [2024-05-16 20:08:51.777114] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:10:04.716 [2024-05-16 20:08:51.778301] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:04.716 73 Celsius) 00:10:04.716 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:04.716 Available Spare: 0% 00:10:04.716 Available Spare Threshold: 0% 00:10:04.716 Life Percentage Used: 0% 00:10:04.716 Data Units Read: 0 00:10:04.716 Data Units Written: 0 00:10:04.716 Host Read Commands: 0 00:10:04.716 Host Write Commands: 0 00:10:04.716 Controller Busy Time: 0 minutes 00:10:04.716 Power Cycles: 0 00:10:04.716 Power On Hours: 0 hours 00:10:04.716 Unsafe Shutdowns: 0 00:10:04.716 Unrecoverable Media Errors: 0 00:10:04.716 Lifetime Error Log Entries: 0 00:10:04.716 Warning Temperature Time: 0 minutes 00:10:04.716 Critical Temperature Time: 0 minutes 00:10:04.716 00:10:04.716 Number of Queues 00:10:04.716 ================ 00:10:04.716 Number of I/O Submission Queues: 127 00:10:04.716 Number of I/O Completion Queues: 127 00:10:04.716 00:10:04.716 Active Namespaces 00:10:04.716 ================= 00:10:04.716 Namespace ID:1 00:10:04.716 Error Recovery Timeout: Unlimited 00:10:04.716 Command Set Identifier: NVM (00h) 00:10:04.716 Deallocate: Supported 00:10:04.716 Deallocated/Unwritten Error: Not Supported 00:10:04.716 Deallocated Read Value: Unknown 00:10:04.716 Deallocate in Write Zeroes: Not Supported 00:10:04.716 Deallocated Guard Field: 0xFFFF 00:10:04.716 Flush: Supported 00:10:04.716 Reservation: Supported 00:10:04.716 Namespace Sharing Capabilities: Multiple Controllers 00:10:04.716 Size (in LBAs): 131072 (0GiB) 00:10:04.716 Capacity (in LBAs): 131072 (0GiB) 00:10:04.716 Utilization (in LBAs): 131072 (0GiB) 00:10:04.716 NGUID: 22F3B60170614A99A76EFA5F80E31564 00:10:04.716 UUID: 22f3b601-7061-4a99-a76e-fa5f80e31564 00:10:04.716 Thin Provisioning: Not Supported 00:10:04.716 Per-NS Atomic Units: Yes 00:10:04.716 Atomic Boundary Size (Normal): 0 00:10:04.716 Atomic Boundary Size (PFail): 0 00:10:04.716 Atomic Boundary Offset: 0 00:10:04.716 Maximum Single Source Range Length: 65535 
00:10:04.716 Maximum Copy Length: 65535 00:10:04.716 Maximum Source Range Count: 1 00:10:04.716 NGUID/EUI64 Never Reused: No 00:10:04.716 Namespace Write Protected: No 00:10:04.716 Number of LBA Formats: 1 00:10:04.716 Current LBA Format: LBA Format #00 00:10:04.716 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:04.716 00:10:04.716 20:08:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:04.716 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.974 [2024-05-16 20:08:52.006612] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:10.238 Initializing NVMe Controllers 00:10:10.238 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:10.238 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:10.238 Initialization complete. Launching workers. 00:10:10.238 ======================================================== 00:10:10.238 Latency(us) 00:10:10.238 Device Information : IOPS MiB/s Average min max 00:10:10.238 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34884.11 136.27 3668.67 1187.86 7515.35 00:10:10.238 ======================================================== 00:10:10.238 Total : 34884.11 136.27 3668.67 1187.86 7515.35 00:10:10.238 00:10:10.238 [2024-05-16 20:08:57.118258] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:10.238 20:08:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:10.238 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.238 [2024-05-16 20:08:57.359887] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:15.501 Initializing NVMe Controllers 00:10:15.501 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:15.501 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:15.501 Initialization complete. Launching workers. 
00:10:15.501 ======================================================== 00:10:15.501 Latency(us) 00:10:15.501 Device Information : IOPS MiB/s Average min max 00:10:15.501 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31628.80 123.55 4047.75 1240.50 7833.58 00:10:15.501 ======================================================== 00:10:15.501 Total : 31628.80 123.55 4047.75 1240.50 7833.58 00:10:15.501 00:10:15.501 [2024-05-16 20:09:02.381025] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:15.501 20:09:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:15.501 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.501 [2024-05-16 20:09:02.593802] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:20.764 [2024-05-16 20:09:07.732996] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:20.764 Initializing NVMe Controllers 00:10:20.764 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:20.764 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:20.764 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:20.764 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:20.764 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:20.764 Initialization complete. Launching workers. 00:10:20.764 Starting thread on core 2 00:10:20.764 Starting thread on core 3 00:10:20.764 Starting thread on core 1 00:10:20.764 20:09:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:20.764 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.023 [2024-05-16 20:09:08.043343] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:25.204 [2024-05-16 20:09:11.713120] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:25.204 Initializing NVMe Controllers 00:10:25.204 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:25.204 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:25.204 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:25.204 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:25.204 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:25.204 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:25.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:25.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:25.204 Initialization complete. Launching workers. 
00:10:25.204 Starting thread on core 1 with urgent priority queue 00:10:25.204 Starting thread on core 2 with urgent priority queue 00:10:25.204 Starting thread on core 3 with urgent priority queue 00:10:25.204 Starting thread on core 0 with urgent priority queue 00:10:25.204 SPDK bdev Controller (SPDK2 ) core 0: 1340.33 IO/s 74.61 secs/100000 ios 00:10:25.204 SPDK bdev Controller (SPDK2 ) core 1: 1359.67 IO/s 73.55 secs/100000 ios 00:10:25.204 SPDK bdev Controller (SPDK2 ) core 2: 1418.33 IO/s 70.51 secs/100000 ios 00:10:25.204 SPDK bdev Controller (SPDK2 ) core 3: 1095.33 IO/s 91.30 secs/100000 ios 00:10:25.204 ======================================================== 00:10:25.204 00:10:25.204 20:09:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:25.204 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.204 [2024-05-16 20:09:12.019341] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:25.204 Initializing NVMe Controllers 00:10:25.204 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:25.204 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:25.204 Namespace ID: 1 size: 0GB 00:10:25.204 Initialization complete. 00:10:25.204 INFO: using host memory buffer for IO 00:10:25.204 Hello world! 00:10:25.204 [2024-05-16 20:09:12.028511] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:25.204 20:09:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:25.204 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.204 [2024-05-16 20:09:12.316673] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:26.584 Initializing NVMe Controllers 00:10:26.585 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:26.585 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:26.585 Initialization complete. Launching workers. 
00:10:26.585 submit (in ns) avg, min, max = 7765.2, 3522.2, 4019772.2 00:10:26.585 complete (in ns) avg, min, max = 26111.9, 2064.4, 4091714.4 00:10:26.585 00:10:26.585 Submit histogram 00:10:26.585 ================ 00:10:26.585 Range in us Cumulative Count 00:10:26.585 3.508 - 3.532: 0.0813% ( 11) 00:10:26.585 3.532 - 3.556: 1.0642% ( 133) 00:10:26.585 3.556 - 3.579: 3.6583% ( 351) 00:10:26.585 3.579 - 3.603: 9.2380% ( 755) 00:10:26.585 3.603 - 3.627: 16.5694% ( 992) 00:10:26.585 3.627 - 3.650: 25.4009% ( 1195) 00:10:26.585 3.650 - 3.674: 33.3900% ( 1081) 00:10:26.585 3.674 - 3.698: 42.1181% ( 1181) 00:10:26.585 3.698 - 3.721: 49.8633% ( 1048) 00:10:26.585 3.721 - 3.745: 57.3794% ( 1017) 00:10:26.585 3.745 - 3.769: 61.3924% ( 543) 00:10:26.585 3.769 - 3.793: 64.7033% ( 448) 00:10:26.585 3.793 - 3.816: 67.8146% ( 421) 00:10:26.585 3.816 - 3.840: 70.8669% ( 413) 00:10:26.585 3.840 - 3.864: 74.5399% ( 497) 00:10:26.585 3.864 - 3.887: 78.1613% ( 490) 00:10:26.585 3.887 - 3.911: 81.4500% ( 445) 00:10:26.585 3.911 - 3.935: 84.7461% ( 446) 00:10:26.585 3.935 - 3.959: 87.1332% ( 323) 00:10:26.585 3.959 - 3.982: 88.9882% ( 251) 00:10:26.585 3.982 - 4.006: 90.6807% ( 229) 00:10:26.585 4.006 - 4.030: 91.9518% ( 172) 00:10:26.585 4.030 - 4.053: 93.0086% ( 143) 00:10:26.585 4.053 - 4.077: 93.8955% ( 120) 00:10:26.585 4.077 - 4.101: 94.5532% ( 89) 00:10:26.585 4.101 - 4.124: 95.1667% ( 83) 00:10:26.585 4.124 - 4.148: 95.5731% ( 55) 00:10:26.585 4.148 - 4.172: 96.0387% ( 63) 00:10:26.585 4.172 - 4.196: 96.3270% ( 39) 00:10:26.585 4.196 - 4.219: 96.5339% ( 28) 00:10:26.585 4.219 - 4.243: 96.6078% ( 10) 00:10:26.585 4.243 - 4.267: 96.7630% ( 21) 00:10:26.585 4.267 - 4.290: 96.8738% ( 15) 00:10:26.585 4.290 - 4.314: 96.9625% ( 12) 00:10:26.585 4.314 - 4.338: 97.0438% ( 11) 00:10:26.585 4.338 - 4.361: 97.1251% ( 11) 00:10:26.585 4.361 - 4.385: 97.2212% ( 13) 00:10:26.585 4.385 - 4.409: 97.2803% ( 8) 00:10:26.585 4.409 - 4.433: 97.3394% ( 8) 00:10:26.585 4.433 - 4.456: 97.3912% ( 7) 00:10:26.585 4.456 - 4.480: 97.4355% ( 6) 00:10:26.585 4.480 - 4.504: 97.4429% ( 1) 00:10:26.585 4.504 - 4.527: 97.4577% ( 2) 00:10:26.585 4.527 - 4.551: 97.4725% ( 2) 00:10:26.585 4.551 - 4.575: 97.4873% ( 2) 00:10:26.585 4.646 - 4.670: 97.4946% ( 1) 00:10:26.585 4.670 - 4.693: 97.5020% ( 1) 00:10:26.585 4.717 - 4.741: 97.5168% ( 2) 00:10:26.585 4.741 - 4.764: 97.5316% ( 2) 00:10:26.585 4.788 - 4.812: 97.5390% ( 1) 00:10:26.585 4.836 - 4.859: 97.5538% ( 2) 00:10:26.585 4.859 - 4.883: 97.5759% ( 3) 00:10:26.585 4.883 - 4.907: 97.6129% ( 5) 00:10:26.585 4.907 - 4.930: 97.6498% ( 5) 00:10:26.585 4.930 - 4.954: 97.6720% ( 3) 00:10:26.585 4.954 - 4.978: 97.7016% ( 4) 00:10:26.585 4.978 - 5.001: 97.7681% ( 9) 00:10:26.585 5.001 - 5.025: 97.8124% ( 6) 00:10:26.585 5.025 - 5.049: 97.8789% ( 9) 00:10:26.585 5.049 - 5.073: 97.9159% ( 5) 00:10:26.585 5.073 - 5.096: 97.9824% ( 9) 00:10:26.585 5.096 - 5.120: 98.0415% ( 8) 00:10:26.585 5.120 - 5.144: 98.0933% ( 7) 00:10:26.585 5.144 - 5.167: 98.1228% ( 4) 00:10:26.585 5.167 - 5.191: 98.1524% ( 4) 00:10:26.585 5.191 - 5.215: 98.1820% ( 4) 00:10:26.585 5.215 - 5.239: 98.2115% ( 4) 00:10:26.585 5.239 - 5.262: 98.2411% ( 4) 00:10:26.585 5.262 - 5.286: 98.2780% ( 5) 00:10:26.585 5.286 - 5.310: 98.3002% ( 3) 00:10:26.585 5.310 - 5.333: 98.3076% ( 1) 00:10:26.585 5.333 - 5.357: 98.3224% ( 2) 00:10:26.585 5.357 - 5.381: 98.3372% ( 2) 00:10:26.585 5.381 - 5.404: 98.3519% ( 2) 00:10:26.585 5.428 - 5.452: 98.3741% ( 3) 00:10:26.585 5.452 - 5.476: 98.3889% ( 2) 00:10:26.585 5.547 - 5.570: 98.3963% ( 1) 
00:10:26.585 5.570 - 5.594: 98.4111% ( 2) 00:10:26.585 5.594 - 5.618: 98.4332% ( 3) 00:10:26.585 5.689 - 5.713: 98.4406% ( 1) 00:10:26.585 5.736 - 5.760: 98.4480% ( 1) 00:10:26.585 5.760 - 5.784: 98.4628% ( 2) 00:10:26.585 5.831 - 5.855: 98.4702% ( 1) 00:10:26.585 5.902 - 5.926: 98.4850% ( 2) 00:10:26.585 5.973 - 5.997: 98.4924% ( 1) 00:10:26.585 5.997 - 6.021: 98.4997% ( 1) 00:10:26.585 6.163 - 6.210: 98.5071% ( 1) 00:10:26.585 6.258 - 6.305: 98.5145% ( 1) 00:10:26.585 6.447 - 6.495: 98.5219% ( 1) 00:10:26.585 6.921 - 6.969: 98.5293% ( 1) 00:10:26.585 7.064 - 7.111: 98.5367% ( 1) 00:10:26.585 7.111 - 7.159: 98.5441% ( 1) 00:10:26.585 7.159 - 7.206: 98.5515% ( 1) 00:10:26.585 7.253 - 7.301: 98.5589% ( 1) 00:10:26.585 7.348 - 7.396: 98.5736% ( 2) 00:10:26.585 7.396 - 7.443: 98.5884% ( 2) 00:10:26.585 7.443 - 7.490: 98.6032% ( 2) 00:10:26.585 7.490 - 7.538: 98.6106% ( 1) 00:10:26.585 7.585 - 7.633: 98.6180% ( 1) 00:10:26.585 7.633 - 7.680: 98.6254% ( 1) 00:10:26.585 7.680 - 7.727: 98.6328% ( 1) 00:10:26.585 7.775 - 7.822: 98.6402% ( 1) 00:10:26.585 7.822 - 7.870: 98.6476% ( 1) 00:10:26.585 7.870 - 7.917: 98.6549% ( 1) 00:10:26.585 7.964 - 8.012: 98.6771% ( 3) 00:10:26.585 8.012 - 8.059: 98.6919% ( 2) 00:10:26.585 8.059 - 8.107: 98.7067% ( 2) 00:10:26.585 8.154 - 8.201: 98.7141% ( 1) 00:10:26.585 8.249 - 8.296: 98.7288% ( 2) 00:10:26.585 8.296 - 8.344: 98.7584% ( 4) 00:10:26.585 8.344 - 8.391: 98.7658% ( 1) 00:10:26.585 8.391 - 8.439: 98.7732% ( 1) 00:10:26.585 8.439 - 8.486: 98.7880% ( 2) 00:10:26.585 8.676 - 8.723: 98.8101% ( 3) 00:10:26.585 8.723 - 8.770: 98.8175% ( 1) 00:10:26.585 8.770 - 8.818: 98.8249% ( 1) 00:10:26.585 8.818 - 8.865: 98.8397% ( 2) 00:10:26.585 8.865 - 8.913: 98.8471% ( 1) 00:10:26.585 9.007 - 9.055: 98.8619% ( 2) 00:10:26.585 9.102 - 9.150: 98.8693% ( 1) 00:10:26.585 9.150 - 9.197: 98.8767% ( 1) 00:10:26.585 9.244 - 9.292: 98.8840% ( 1) 00:10:26.585 9.292 - 9.339: 98.8914% ( 1) 00:10:26.585 9.481 - 9.529: 98.8988% ( 1) 00:10:26.585 9.576 - 9.624: 98.9136% ( 2) 00:10:26.585 9.624 - 9.671: 98.9210% ( 1) 00:10:26.585 9.861 - 9.908: 98.9506% ( 4) 00:10:26.585 9.956 - 10.003: 98.9579% ( 1) 00:10:26.585 10.050 - 10.098: 98.9653% ( 1) 00:10:26.585 10.098 - 10.145: 98.9727% ( 1) 00:10:26.585 10.193 - 10.240: 98.9875% ( 2) 00:10:26.585 10.240 - 10.287: 99.0023% ( 2) 00:10:26.585 10.335 - 10.382: 99.0097% ( 1) 00:10:26.585 10.430 - 10.477: 99.0171% ( 1) 00:10:26.586 10.477 - 10.524: 99.0392% ( 3) 00:10:26.586 11.425 - 11.473: 99.0466% ( 1) 00:10:26.586 11.473 - 11.520: 99.0540% ( 1) 00:10:26.586 11.520 - 11.567: 99.0614% ( 1) 00:10:26.586 11.567 - 11.615: 99.0688% ( 1) 00:10:26.586 11.947 - 11.994: 99.0762% ( 1) 00:10:26.586 12.041 - 12.089: 99.0836% ( 1) 00:10:26.586 12.136 - 12.231: 99.0984% ( 2) 00:10:26.586 12.326 - 12.421: 99.1058% ( 1) 00:10:26.586 12.895 - 12.990: 99.1131% ( 1) 00:10:26.586 12.990 - 13.084: 99.1205% ( 1) 00:10:26.586 13.084 - 13.179: 99.1279% ( 1) 00:10:26.586 13.653 - 13.748: 99.1353% ( 1) 00:10:26.586 13.748 - 13.843: 99.1501% ( 2) 00:10:26.586 14.033 - 14.127: 99.1575% ( 1) 00:10:26.586 14.127 - 14.222: 99.1649% ( 1) 00:10:26.586 14.317 - 14.412: 99.1723% ( 1) 00:10:26.586 14.507 - 14.601: 99.1797% ( 1) 00:10:26.586 14.886 - 14.981: 99.1871% ( 1) 00:10:26.586 15.076 - 15.170: 99.1944% ( 1) 00:10:26.586 15.360 - 15.455: 99.2018% ( 1) 00:10:26.586 15.550 - 15.644: 99.2092% ( 1) 00:10:26.586 17.351 - 17.446: 99.2240% ( 2) 00:10:26.586 17.636 - 17.730: 99.2314% ( 1) 00:10:26.586 17.730 - 17.825: 99.2683% ( 5) 00:10:26.586 17.825 - 17.920: 99.3275% ( 8) 
00:10:26.586 17.920 - 18.015: 99.4014% ( 10) 00:10:26.586 18.015 - 18.110: 99.4605% ( 8) 00:10:26.586 18.110 - 18.204: 99.5270% ( 9) 00:10:26.586 18.204 - 18.299: 99.5787% ( 7) 00:10:26.586 18.299 - 18.394: 99.6009% ( 3) 00:10:26.586 18.394 - 18.489: 99.6453% ( 6) 00:10:26.586 18.489 - 18.584: 99.6896% ( 6) 00:10:26.586 18.584 - 18.679: 99.7709% ( 11) 00:10:26.586 18.679 - 18.773: 99.8078% ( 5) 00:10:26.586 18.773 - 18.868: 99.8300% ( 3) 00:10:26.586 18.963 - 19.058: 99.8374% ( 1) 00:10:26.586 19.058 - 19.153: 99.8522% ( 2) 00:10:26.586 19.153 - 19.247: 99.8670% ( 2) 00:10:26.586 19.247 - 19.342: 99.8744% ( 1) 00:10:26.586 19.342 - 19.437: 99.8818% ( 1) 00:10:26.586 19.437 - 19.532: 99.8891% ( 1) 00:10:26.586 19.721 - 19.816: 99.8965% ( 1) 00:10:26.586 25.410 - 25.600: 99.9039% ( 1) 00:10:26.586 3980.705 - 4004.978: 99.9557% ( 7) 00:10:26.586 4004.978 - 4029.250: 100.0000% ( 6) 00:10:26.586 00:10:26.586 Complete histogram 00:10:26.586 ================== 00:10:26.586 Range in us Cumulative Count 00:10:26.586 2.062 - 2.074: 3.1040% ( 420) 00:10:26.586 2.074 - 2.086: 24.9353% ( 2954) 00:10:26.586 2.086 - 2.098: 28.2019% ( 442) 00:10:26.586 2.098 - 2.110: 42.9458% ( 1995) 00:10:26.586 2.110 - 2.121: 60.4981% ( 2375) 00:10:26.586 2.121 - 2.133: 62.7079% ( 299) 00:10:26.586 2.133 - 2.145: 67.1199% ( 597) 00:10:26.586 2.145 - 2.157: 73.0766% ( 806) 00:10:26.586 2.157 - 2.169: 74.2887% ( 164) 00:10:26.586 2.169 - 2.181: 80.5853% ( 852) 00:10:26.586 2.181 - 2.193: 85.8030% ( 706) 00:10:26.586 2.193 - 2.204: 87.0667% ( 171) 00:10:26.586 2.204 - 2.216: 89.1508% ( 282) 00:10:26.586 2.216 - 2.228: 91.4049% ( 305) 00:10:26.586 2.228 - 2.240: 92.1292% ( 98) 00:10:26.586 2.240 - 2.252: 93.3264% ( 162) 00:10:26.586 2.252 - 2.264: 94.5532% ( 166) 00:10:26.586 2.264 - 2.276: 94.8489% ( 40) 00:10:26.586 2.276 - 2.287: 95.2849% ( 59) 00:10:26.586 2.287 - 2.299: 95.5214% ( 32) 00:10:26.586 2.299 - 2.311: 95.6396% ( 16) 00:10:26.586 2.311 - 2.323: 95.7062% ( 9) 00:10:26.586 2.323 - 2.335: 95.8244% ( 16) 00:10:26.586 2.335 - 2.347: 95.8835% ( 8) 00:10:26.586 2.347 - 2.359: 95.9944% ( 15) 00:10:26.586 2.359 - 2.370: 96.0978% ( 14) 00:10:26.586 2.370 - 2.382: 96.2457% ( 20) 00:10:26.586 2.382 - 2.394: 96.3713% ( 17) 00:10:26.586 2.394 - 2.406: 96.5339% ( 22) 00:10:26.586 2.406 - 2.418: 96.7186% ( 25) 00:10:26.586 2.418 - 2.430: 96.9551% ( 32) 00:10:26.586 2.430 - 2.441: 97.1547% ( 27) 00:10:26.586 2.441 - 2.453: 97.3764% ( 30) 00:10:26.586 2.453 - 2.465: 97.5759% ( 27) 00:10:26.586 2.465 - 2.477: 97.7311% ( 21) 00:10:26.586 2.477 - 2.489: 97.8642% ( 18) 00:10:26.586 2.489 - 2.501: 98.0120% ( 20) 00:10:26.586 2.501 - 2.513: 98.1154% ( 14) 00:10:26.586 2.513 - 2.524: 98.1524% ( 5) 00:10:26.586 2.524 - 2.536: 98.2189% ( 9) 00:10:26.586 2.536 - 2.548: 98.2780% ( 8) 00:10:26.586 2.548 - 2.560: 98.3372% ( 8) 00:10:26.586 2.560 - 2.572: 98.3815% ( 6) 00:10:26.586 2.572 - 2.584: 98.4111% ( 4) 00:10:26.586 2.584 - 2.596: 98.4554% ( 6) 00:10:26.586 2.596 - 2.607: 98.4702% ( 2) 00:10:26.586 2.607 - 2.619: 98.4924% ( 3) 00:10:26.586 2.631 - 2.643: 98.4997% ( 1) 00:10:26.586 2.643 - 2.655: 98.5145% ( 2) 00:10:26.586 2.667 - 2.679: 98.5219% ( 1) 00:10:26.586 2.679 - 2.690: 98.5293% ( 1) 00:10:26.586 2.702 - 2.714: 98.5441% ( 2) 00:10:26.586 2.726 - 2.738: 98.5515% ( 1) 00:10:26.586 2.738 - 2.750: 98.5663% ( 2) 00:10:26.586 2.761 - 2.773: 98.5736% ( 1) 00:10:26.586 2.797 - 2.809: 98.5810% ( 1) 00:10:26.586 2.809 - 2.821: 98.5884% ( 1) 00:10:26.586 2.821 - 2.833: 98.5958% ( 1) 00:10:26.586 2.880 - 2.892: 98.6032% ( 1) 00:10:26.586 
2.939 - 2.951: 98.6180% ( 2) 00:10:26.586 2.963 - 2.975: 98.6254% ( 1) 00:10:26.586 3.058 - 3.081: 98.6328% ( 1) 00:10:26.586 3.081 - 3.105: 98.6402% ( 1) 00:10:26.586 3.461 - 3.484: 98.6476% ( 1) 00:10:26.586 3.556 - 3.579: 98.6549% ( 1) 00:10:26.586 3.579 - 3.603: 98.6623% ( 1) 00:10:26.586 3.603 - 3.627: 98.6771% ( 2) 00:10:26.586 3.650 - 3.674: 98.6845% ( 1) 00:10:26.586 3.721 - 3.745: 98.6919% ( 1) 00:10:26.586 3.745 - 3.769: 98.7141% ( 3) 00:10:26.586 3.793 - 3.816: 98.7288% ( 2) 00:10:26.586 3.840 - 3.864: 98.7362% ( 1) [2024-05-16 20:09:13.418627] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:26.586 3.864 - 3.887: 98.7510% ( 2) 00:10:26.586 3.959 - 3.982: 98.7584% ( 1) 00:10:26.586 4.030 - 4.053: 98.7658% ( 1) 00:10:26.586 4.077 - 4.101: 98.7732% ( 1) 00:10:26.586 4.148 - 4.172: 98.7880% ( 2) 00:10:26.586 4.172 - 4.196: 98.7954% ( 1) 00:10:26.586 4.196 - 4.219: 98.8101% ( 2) 00:10:26.586 4.219 - 4.243: 98.8175% ( 1) 00:10:26.586 5.239 - 5.262: 98.8249% ( 1) 00:10:26.586 5.547 - 5.570: 98.8323% ( 1) 00:10:26.586 5.784 - 5.807: 98.8397% ( 1) 00:10:26.586 6.068 - 6.116: 98.8471% ( 1) 00:10:26.586 6.116 - 6.163: 98.8545% ( 1) 00:10:26.586 6.590 - 6.637: 98.8619% ( 1) 00:10:26.586 6.732 - 6.779: 98.8693% ( 1) 00:10:26.586 6.779 - 6.827: 98.8767% ( 1) 00:10:26.586 6.874 - 6.921: 98.8840% ( 1) 00:10:26.586 7.775 - 7.822: 98.8914% ( 1) 00:10:26.586 7.870 - 7.917: 98.8988% ( 1) 00:10:26.586 8.059 - 8.107: 98.9062% ( 1) 00:10:26.586 8.249 - 8.296: 98.9136% ( 1) 00:10:26.586 8.344 - 8.391: 98.9210% ( 1) 00:10:26.586 9.055 - 9.102: 98.9284% ( 1) 00:10:26.586 9.576 - 9.624: 98.9358% ( 1) 00:10:26.586 15.360 - 15.455: 98.9432% ( 1) 00:10:26.586 15.550 - 15.644: 98.9579% ( 2) 00:10:26.586 15.644 - 15.739: 98.9801% ( 3) 00:10:26.586 15.739 - 15.834: 98.9875% ( 1) 00:10:26.586 15.834 - 15.929: 98.9949% ( 1) 00:10:26.586 15.929 - 16.024: 99.0171% ( 3) 00:10:26.586 16.024 - 16.119: 99.0319% ( 2) 00:10:26.586 16.119 - 16.213: 99.0392% ( 1) 00:10:26.586 16.213 - 16.308: 99.0540% ( 2) 00:10:26.586 16.308 - 16.403: 99.0614% ( 1) 00:10:26.586 16.403 - 16.498: 99.0984% ( 5) 00:10:26.586 16.498 - 16.593: 99.1205% ( 3) 00:10:26.586 16.593 - 16.687: 99.1427% ( 3) 00:10:26.586 16.687 - 16.782: 99.1797% ( 5) 00:10:26.586 16.782 - 16.877: 99.2166% ( 5) 00:10:26.586 16.877 - 16.972: 99.2462% ( 4) 00:10:26.586 16.972 - 17.067: 99.2536% ( 1) 00:10:26.586 17.067 - 17.161: 99.2683% ( 2) 00:10:26.586 17.161 - 17.256: 99.2905% ( 3) 00:10:26.586 17.256 - 17.351: 99.3053% ( 2) 00:10:26.586 17.351 - 17.446: 99.3201% ( 2) 00:10:26.586 17.446 - 17.541: 99.3423% ( 3) 00:10:26.586 17.636 - 17.730: 99.3496% ( 1) 00:10:26.586 17.825 - 17.920: 99.3718% ( 3) 00:10:26.586 18.015 - 18.110: 99.3792% ( 1) 00:10:26.586 18.773 - 18.868: 99.3866% ( 1) 00:10:26.586 23.419 - 23.514: 99.3940% ( 1) 00:10:26.586 29.582 - 29.772: 99.4014% ( 1) 00:10:26.586 3009.801 - 3021.938: 99.4088% ( 1) 00:10:26.586 3956.433 - 3980.705: 99.4162% ( 1) 00:10:26.586 3980.705 - 4004.978: 99.7931% ( 51) 00:10:26.586 4004.978 - 4029.250: 99.9926% ( 27) 00:10:26.586 4077.796 - 4102.068: 100.0000% ( 1) 00:10:26.586 00:10:26.586 20:09:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:26.587 20:09:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:26.587 20:09:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # 
local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:26.587 20:09:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:26.587 20:09:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:26.587 [ 00:10:26.587 { 00:10:26.587 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:26.587 "subtype": "Discovery", 00:10:26.587 "listen_addresses": [], 00:10:26.587 "allow_any_host": true, 00:10:26.587 "hosts": [] 00:10:26.587 }, 00:10:26.587 { 00:10:26.587 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:26.587 "subtype": "NVMe", 00:10:26.587 "listen_addresses": [ 00:10:26.587 { 00:10:26.587 "trtype": "VFIOUSER", 00:10:26.587 "adrfam": "IPv4", 00:10:26.587 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:26.587 "trsvcid": "0" 00:10:26.587 } 00:10:26.587 ], 00:10:26.587 "allow_any_host": true, 00:10:26.587 "hosts": [], 00:10:26.587 "serial_number": "SPDK1", 00:10:26.587 "model_number": "SPDK bdev Controller", 00:10:26.587 "max_namespaces": 32, 00:10:26.587 "min_cntlid": 1, 00:10:26.587 "max_cntlid": 65519, 00:10:26.587 "namespaces": [ 00:10:26.587 { 00:10:26.587 "nsid": 1, 00:10:26.587 "bdev_name": "Malloc1", 00:10:26.587 "name": "Malloc1", 00:10:26.587 "nguid": "0792A8681B35471DBF6555CDE6792FD9", 00:10:26.587 "uuid": "0792a868-1b35-471d-bf65-55cde6792fd9" 00:10:26.587 }, 00:10:26.587 { 00:10:26.587 "nsid": 2, 00:10:26.587 "bdev_name": "Malloc3", 00:10:26.587 "name": "Malloc3", 00:10:26.587 "nguid": "72C1E492839849508A2C1F1FA97C9611", 00:10:26.587 "uuid": "72c1e492-8398-4950-8a2c-1f1fa97c9611" 00:10:26.587 } 00:10:26.587 ] 00:10:26.587 }, 00:10:26.587 { 00:10:26.587 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:26.587 "subtype": "NVMe", 00:10:26.587 "listen_addresses": [ 00:10:26.587 { 00:10:26.587 "trtype": "VFIOUSER", 00:10:26.587 "adrfam": "IPv4", 00:10:26.587 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:26.587 "trsvcid": "0" 00:10:26.587 } 00:10:26.587 ], 00:10:26.587 "allow_any_host": true, 00:10:26.587 "hosts": [], 00:10:26.587 "serial_number": "SPDK2", 00:10:26.587 "model_number": "SPDK bdev Controller", 00:10:26.587 "max_namespaces": 32, 00:10:26.587 "min_cntlid": 1, 00:10:26.587 "max_cntlid": 65519, 00:10:26.587 "namespaces": [ 00:10:26.587 { 00:10:26.587 "nsid": 1, 00:10:26.587 "bdev_name": "Malloc2", 00:10:26.587 "name": "Malloc2", 00:10:26.587 "nguid": "22F3B60170614A99A76EFA5F80E31564", 00:10:26.587 "uuid": "22f3b601-7061-4a99-a76e-fa5f80e31564" 00:10:26.587 } 00:10:26.587 ] 00:10:26.587 } 00:10:26.587 ] 00:10:26.587 20:09:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:26.587 20:09:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=164115 00:10:26.587 20:09:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:26.587 20:09:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:26.587 20:09:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:10:26.587 20:09:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:26.587 20:09:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:10:26.587 20:09:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=1 00:10:26.587 20:09:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:10:26.850 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.850 20:09:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:26.850 20:09:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:10:26.850 20:09:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=2 00:10:26.850 20:09:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:10:26.850 [2024-05-16 20:09:13.871331] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:26.850 20:09:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:26.850 20:09:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:26.850 20:09:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:10:26.850 20:09:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:26.850 20:09:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:27.107 Malloc4 00:10:27.108 20:09:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:27.365 [2024-05-16 20:09:14.443486] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:27.365 20:09:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:27.365 Asynchronous Event Request test 00:10:27.365 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:27.365 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:27.365 Registering asynchronous event callbacks... 00:10:27.365 Starting namespace attribute notice tests for all controllers... 00:10:27.365 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:27.365 aer_cb - Changed Namespace 00:10:27.365 Cleaning up... 
00:10:27.622 [ 00:10:27.622 { 00:10:27.622 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:27.622 "subtype": "Discovery", 00:10:27.622 "listen_addresses": [], 00:10:27.622 "allow_any_host": true, 00:10:27.622 "hosts": [] 00:10:27.622 }, 00:10:27.622 { 00:10:27.622 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:27.622 "subtype": "NVMe", 00:10:27.622 "listen_addresses": [ 00:10:27.622 { 00:10:27.622 "trtype": "VFIOUSER", 00:10:27.622 "adrfam": "IPv4", 00:10:27.622 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:27.622 "trsvcid": "0" 00:10:27.622 } 00:10:27.622 ], 00:10:27.622 "allow_any_host": true, 00:10:27.622 "hosts": [], 00:10:27.622 "serial_number": "SPDK1", 00:10:27.622 "model_number": "SPDK bdev Controller", 00:10:27.622 "max_namespaces": 32, 00:10:27.622 "min_cntlid": 1, 00:10:27.622 "max_cntlid": 65519, 00:10:27.622 "namespaces": [ 00:10:27.622 { 00:10:27.622 "nsid": 1, 00:10:27.622 "bdev_name": "Malloc1", 00:10:27.622 "name": "Malloc1", 00:10:27.622 "nguid": "0792A8681B35471DBF6555CDE6792FD9", 00:10:27.622 "uuid": "0792a868-1b35-471d-bf65-55cde6792fd9" 00:10:27.622 }, 00:10:27.622 { 00:10:27.622 "nsid": 2, 00:10:27.622 "bdev_name": "Malloc3", 00:10:27.622 "name": "Malloc3", 00:10:27.622 "nguid": "72C1E492839849508A2C1F1FA97C9611", 00:10:27.622 "uuid": "72c1e492-8398-4950-8a2c-1f1fa97c9611" 00:10:27.622 } 00:10:27.622 ] 00:10:27.622 }, 00:10:27.622 { 00:10:27.622 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:27.622 "subtype": "NVMe", 00:10:27.622 "listen_addresses": [ 00:10:27.622 { 00:10:27.622 "trtype": "VFIOUSER", 00:10:27.622 "adrfam": "IPv4", 00:10:27.622 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:27.622 "trsvcid": "0" 00:10:27.622 } 00:10:27.622 ], 00:10:27.622 "allow_any_host": true, 00:10:27.622 "hosts": [], 00:10:27.622 "serial_number": "SPDK2", 00:10:27.622 "model_number": "SPDK bdev Controller", 00:10:27.622 "max_namespaces": 32, 00:10:27.622 "min_cntlid": 1, 00:10:27.622 "max_cntlid": 65519, 00:10:27.622 "namespaces": [ 00:10:27.622 { 00:10:27.622 "nsid": 1, 00:10:27.622 "bdev_name": "Malloc2", 00:10:27.622 "name": "Malloc2", 00:10:27.622 "nguid": "22F3B60170614A99A76EFA5F80E31564", 00:10:27.622 "uuid": "22f3b601-7061-4a99-a76e-fa5f80e31564" 00:10:27.622 }, 00:10:27.622 { 00:10:27.622 "nsid": 2, 00:10:27.622 "bdev_name": "Malloc4", 00:10:27.622 "name": "Malloc4", 00:10:27.623 "nguid": "B548FED432DB456E8C0C39B0A77F6D93", 00:10:27.623 "uuid": "b548fed4-32db-456e-8c0c-39b0a77f6d93" 00:10:27.623 } 00:10:27.623 ] 00:10:27.623 } 00:10:27.623 ] 00:10:27.623 20:09:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 164115 00:10:27.623 20:09:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:27.623 20:09:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 157873 00:10:27.623 20:09:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 157873 ']' 00:10:27.623 20:09:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 157873 00:10:27.623 20:09:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:10:27.623 20:09:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:27.623 20:09:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 157873 00:10:27.623 20:09:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:27.623 20:09:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 
00:10:27.623 20:09:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 157873' 00:10:27.623 killing process with pid 157873 00:10:27.623 20:09:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 157873 00:10:27.623 [2024-05-16 20:09:14.736042] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:27.623 20:09:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 157873 00:10:28.187 20:09:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:28.187 20:09:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:28.187 20:09:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:28.187 20:09:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:28.187 20:09:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:28.187 20:09:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=164379 00:10:28.187 20:09:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:28.187 20:09:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 164379' 00:10:28.187 Process pid: 164379 00:10:28.187 20:09:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:28.187 20:09:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 164379 00:10:28.187 20:09:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 164379 ']' 00:10:28.187 20:09:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.187 20:09:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:28.187 20:09:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.187 20:09:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:28.187 20:09:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:28.187 [2024-05-16 20:09:15.140733] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:28.187 [2024-05-16 20:09:15.141739] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:10:28.187 [2024-05-16 20:09:15.141795] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.187 EAL: No free 2048 kB hugepages reported on node 1 00:10:28.187 [2024-05-16 20:09:15.199349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:28.187 [2024-05-16 20:09:15.310475] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:28.187 [2024-05-16 20:09:15.310538] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:28.187 [2024-05-16 20:09:15.310566] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:28.187 [2024-05-16 20:09:15.310578] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:28.187 [2024-05-16 20:09:15.310587] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:28.187 [2024-05-16 20:09:15.310682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.187 [2024-05-16 20:09:15.310748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:28.187 [2024-05-16 20:09:15.310776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:28.187 [2024-05-16 20:09:15.310778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.445 [2024-05-16 20:09:15.417747] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:28.445 [2024-05-16 20:09:15.417984] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:28.445 [2024-05-16 20:09:15.418276] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:28.445 [2024-05-16 20:09:15.418895] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:28.445 [2024-05-16 20:09:15.419147] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:10:28.445 20:09:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:28.445 20:09:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:10:28.445 20:09:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:29.376 20:09:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:29.633 20:09:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:29.633 20:09:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:29.633 20:09:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:29.633 20:09:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:29.633 20:09:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:29.891 Malloc1 00:10:29.891 20:09:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:30.148 20:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:30.404 20:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:30.661 [2024-05-16 20:09:17.663419] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:30.661 20:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:30.661 20:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:30.661 20:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:30.918 Malloc2 00:10:30.918 20:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:31.174 20:09:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:31.431 20:09:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:31.687 20:09:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:31.687 20:09:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 164379 00:10:31.687 20:09:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 164379 ']' 00:10:31.687 20:09:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 164379 00:10:31.687 
20:09:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:10:31.687 20:09:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:31.687 20:09:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 164379 00:10:31.687 20:09:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:31.687 20:09:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:31.687 20:09:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 164379' 00:10:31.687 killing process with pid 164379 00:10:31.687 20:09:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 164379 00:10:31.687 [2024-05-16 20:09:18.714379] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:31.687 20:09:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 164379 00:10:31.944 20:09:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:31.944 20:09:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:31.944 00:10:31.944 real 0m53.599s 00:10:31.944 user 3m31.376s 00:10:31.944 sys 0m4.377s 00:10:31.944 20:09:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:31.944 20:09:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:31.944 ************************************ 00:10:31.944 END TEST nvmf_vfio_user 00:10:31.944 ************************************ 00:10:31.944 20:09:19 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:31.944 20:09:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:31.944 20:09:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:31.944 20:09:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:31.944 ************************************ 00:10:31.944 START TEST nvmf_vfio_user_nvme_compliance 00:10:31.944 ************************************ 00:10:31.944 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:32.201 * Looking for test storage... 
00:10:32.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=164864 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 164864' 00:10:32.201 Process pid: 164864 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 164864 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 164864 ']' 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:32.201 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:32.201 [2024-05-16 20:09:19.188597] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:10:32.201 [2024-05-16 20:09:19.188684] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.201 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.201 [2024-05-16 20:09:19.248472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:32.459 [2024-05-16 20:09:19.364944] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.459 [2024-05-16 20:09:19.364999] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.459 [2024-05-16 20:09:19.365016] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.459 [2024-05-16 20:09:19.365030] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.459 [2024-05-16 20:09:19.365042] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:32.459 [2024-05-16 20:09:19.365104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.459 [2024-05-16 20:09:19.365181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.459 [2024-05-16 20:09:19.365185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.459 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:32.459 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:10:32.459 20:09:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:10:33.390 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:33.390 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:33.390 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:33.390 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.390 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:33.390 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.390 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:33.390 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:33.390 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.390 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:33.390 malloc0 00:10:33.390 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.390 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:33.390 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.390 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:33.647 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.647 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:33.647 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.647 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:33.647 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.647 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:33.647 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.647 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:33.647 [2024-05-16 20:09:20.554694] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:33.647 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.647 20:09:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:33.647 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.647 00:10:33.647 00:10:33.647 CUnit - A unit testing framework for C - Version 2.1-3 00:10:33.647 http://cunit.sourceforge.net/ 00:10:33.647 00:10:33.647 00:10:33.647 Suite: nvme_compliance 00:10:33.647 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-16 20:09:20.713368] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:33.647 [2024-05-16 20:09:20.714821] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:33.647 [2024-05-16 20:09:20.714866] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:33.647 [2024-05-16 20:09:20.714880] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:33.647 [2024-05-16 20:09:20.716390] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:33.647 passed 00:10:33.904 Test: admin_identify_ctrlr_verify_fused ...[2024-05-16 20:09:20.800994] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:33.904 [2024-05-16 20:09:20.804016] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:33.904 passed 00:10:33.904 Test: admin_identify_ns ...[2024-05-16 20:09:20.890374] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:33.904 [2024-05-16 20:09:20.951886] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:33.904 [2024-05-16 20:09:20.961870] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:33.904 [2024-05-16 20:09:20.982991] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:33.904 passed 00:10:34.161 Test: admin_get_features_mandatory_features ...[2024-05-16 20:09:21.066773] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:34.161 [2024-05-16 20:09:21.069796] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:34.161 passed 00:10:34.161 Test: admin_get_features_optional_features ...[2024-05-16 20:09:21.152367] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:34.161 [2024-05-16 20:09:21.155390] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:34.162 passed 00:10:34.162 Test: admin_set_features_number_of_queues ...[2024-05-16 20:09:21.237533] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:34.419 [2024-05-16 20:09:21.340969] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:34.419 passed 00:10:34.419 Test: admin_get_log_page_mandatory_logs ...[2024-05-16 20:09:21.426295] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:34.419 [2024-05-16 20:09:21.429321] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:34.419 passed 
00:10:34.419 Test: admin_get_log_page_with_lpo ...[2024-05-16 20:09:21.511482] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:34.676 [2024-05-16 20:09:21.578867] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:34.676 [2024-05-16 20:09:21.591947] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:34.676 passed 00:10:34.676 Test: fabric_property_get ...[2024-05-16 20:09:21.675692] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:34.676 [2024-05-16 20:09:21.676974] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:34.676 [2024-05-16 20:09:21.678715] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:34.676 passed 00:10:34.676 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-16 20:09:21.764272] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:34.676 [2024-05-16 20:09:21.765540] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:34.676 [2024-05-16 20:09:21.767298] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:34.676 passed 00:10:34.934 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-16 20:09:21.850604] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:34.934 [2024-05-16 20:09:21.934866] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:34.934 [2024-05-16 20:09:21.950864] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:34.934 [2024-05-16 20:09:21.955981] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:34.934 passed 00:10:34.934 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-16 20:09:22.039796] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:34.934 [2024-05-16 20:09:22.041106] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:34.934 [2024-05-16 20:09:22.042818] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:34.934 passed 00:10:35.192 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-16 20:09:22.123985] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:35.192 [2024-05-16 20:09:22.203868] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:35.192 [2024-05-16 20:09:22.227865] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:35.192 [2024-05-16 20:09:22.232955] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:35.192 passed 00:10:35.192 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-16 20:09:22.313700] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:35.192 [2024-05-16 20:09:22.314985] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:35.192 [2024-05-16 20:09:22.315021] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:35.192 [2024-05-16 20:09:22.316724] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:35.448 passed 00:10:35.449 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-16 
20:09:22.402462] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:35.449 [2024-05-16 20:09:22.493867] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:10:35.449 [2024-05-16 20:09:22.500867] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:35.449 [2024-05-16 20:09:22.509867] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:35.449 [2024-05-16 20:09:22.517865] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:35.449 [2024-05-16 20:09:22.546976] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:35.449 passed 00:10:35.706 Test: admin_create_io_sq_verify_pc ...[2024-05-16 20:09:22.630332] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:35.706 [2024-05-16 20:09:22.646877] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:35.706 [2024-05-16 20:09:22.664634] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:35.706 passed 00:10:35.706 Test: admin_create_io_qp_max_qps ...[2024-05-16 20:09:22.746199] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:37.077 [2024-05-16 20:09:23.838883] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:37.334 [2024-05-16 20:09:24.225825] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:37.334 passed 00:10:37.334 Test: admin_create_io_sq_shared_cq ...[2024-05-16 20:09:24.309212] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:37.334 [2024-05-16 20:09:24.440863] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:37.334 [2024-05-16 20:09:24.477981] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:37.592 passed 00:10:37.592 00:10:37.592 Run Summary: Type Total Ran Passed Failed Inactive 00:10:37.592 suites 1 1 n/a 0 0 00:10:37.592 tests 18 18 18 0 0 00:10:37.592 asserts 360 360 360 0 n/a 00:10:37.592 00:10:37.592 Elapsed time = 1.562 seconds 00:10:37.592 20:09:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 164864 00:10:37.592 20:09:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 164864 ']' 00:10:37.592 20:09:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 164864 00:10:37.592 20:09:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:10:37.592 20:09:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:37.592 20:09:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 164864 00:10:37.592 20:09:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:37.592 20:09:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:37.592 20:09:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 164864' 00:10:37.592 killing process with pid 164864 00:10:37.592 20:09:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@965 -- # kill 164864 00:10:37.592 [2024-05-16 20:09:24.557375] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:37.592 20:09:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 164864 00:10:37.849 20:09:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:37.849 20:09:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:37.849 00:10:37.849 real 0m5.748s 00:10:37.849 user 0m16.063s 00:10:37.849 sys 0m0.555s 00:10:37.849 20:09:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:37.849 20:09:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:37.849 ************************************ 00:10:37.849 END TEST nvmf_vfio_user_nvme_compliance 00:10:37.849 ************************************ 00:10:37.849 20:09:24 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:37.849 20:09:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:37.849 20:09:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:37.849 20:09:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:37.849 ************************************ 00:10:37.849 START TEST nvmf_vfio_user_fuzz 00:10:37.849 ************************************ 00:10:37.849 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:37.849 * Looking for test storage... 
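The vfio-user fuzz stage that begins here builds its own single-core target and then points SPDK's nvme_fuzz at it. Condensed from the rpc_cmd and nvme_fuzz invocations traced further down in this section; rpc.py is assumed here as a stand-in for the script's rpc_cmd wrapper, everything else is copied from the trace:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"

    # Single-core target for the fuzzer (the script waits for the RPC socket before continuing).
    "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

    $RPC nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    $RPC bdev_malloc_create 64 512 -b malloc0
    $RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

    # 30-second admin + I/O fuzz run with a fixed seed, as invoked in the trace below.
    "$SPDK_DIR"/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a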
00:10:37.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.849 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.849 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:10:37.849 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.849 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:37.850 20:09:24 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=165583 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 165583' 00:10:37.850 Process pid: 165583 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 165583 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 165583 ']' 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:37.850 20:09:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:38.414 20:09:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:38.415 20:09:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:10:38.415 20:09:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:39.347 malloc0 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:39.347 20:09:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:11.408 Fuzzing completed. Shutting down the fuzz application 00:11:11.408 00:11:11.408 Dumping successful admin opcodes: 00:11:11.408 8, 9, 10, 24, 00:11:11.408 Dumping successful io opcodes: 00:11:11.408 0, 00:11:11.408 NS: 0x200003a1ef00 I/O qp, Total commands completed: 617950, total successful commands: 2387, random_seed: 3671248320 00:11:11.408 NS: 0x200003a1ef00 admin qp, Total commands completed: 149759, total successful commands: 1203, random_seed: 2136787776 00:11:11.408 20:09:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:11.408 20:09:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.408 20:09:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:11.408 20:09:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.408 20:09:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 165583 00:11:11.408 20:09:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 165583 ']' 00:11:11.408 20:09:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 165583 00:11:11.408 20:09:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:11:11.408 20:09:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:11.408 20:09:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 165583 00:11:11.408 20:09:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:11.408 20:09:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:11.408 20:09:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 165583' 00:11:11.408 killing process with pid 165583 00:11:11.408 20:09:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 165583 00:11:11.408 20:09:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 165583 00:11:11.408 20:09:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:11.408 
20:09:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:11.408 00:11:11.408 real 0m32.386s 00:11:11.408 user 0m33.825s 00:11:11.408 sys 0m25.296s 00:11:11.408 20:09:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:11.408 20:09:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:11.408 ************************************ 00:11:11.408 END TEST nvmf_vfio_user_fuzz 00:11:11.408 ************************************ 00:11:11.408 20:09:57 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:11.408 20:09:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:11.408 20:09:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:11.408 20:09:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:11.408 ************************************ 00:11:11.408 START TEST nvmf_host_management 00:11:11.408 ************************************ 00:11:11.408 20:09:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:11.408 * Looking for test storage... 00:11:11.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.408 20:09:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.408 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:11.408 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.408 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.408 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.408 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.408 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.408 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.408 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.408 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.408 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.408 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:11.409 20:09:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:12.342 20:09:59 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:12.342 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:12.342 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.342 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
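At this point the nvmf/common.sh trace is matching the host's NICs against its supported-PCI-ID tables; with SPDK_TEST_NVMF_NICS=e810 this run selects the two Intel E810 functions (0x8086:0x159b at 0000:09:00.0 and 0000:09:00.1). The next step, traced just below, resolves each matched function to its kernel net device through sysfs. A rough stand-alone equivalent of that lookup, using the same /sys glob the script uses (the two PCI addresses are taken from this run; the script iterates over whatever matched):

    for pci in 0000:09:00.0 0000:09:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net device under $pci: ${dev##*/}"
        done
    done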
00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:12.343 Found net devices under 0000:09:00.0: cvl_0_0 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:12.343 Found net devices under 0000:09:00.1: cvl_0_1 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:12.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:12.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:11:12.343 00:11:12.343 --- 10.0.0.2 ping statistics --- 00:11:12.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.343 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:12.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:12.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:11:12.343 00:11:12.343 --- 10.0.0.1 ping statistics --- 00:11:12.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.343 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=171030 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 171030 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 171030 ']' 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:12.343 20:09:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:12.343 [2024-05-16 20:09:59.470305] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:11:12.343 [2024-05-16 20:09:59.470377] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.601 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.601 [2024-05-16 20:09:59.539154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:12.601 [2024-05-16 20:09:59.658899] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.601 [2024-05-16 20:09:59.658956] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.601 [2024-05-16 20:09:59.658972] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.601 [2024-05-16 20:09:59.658985] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.601 [2024-05-16 20:09:59.658997] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.601 [2024-05-16 20:09:59.659102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.601 [2024-05-16 20:09:59.659164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:12.601 [2024-05-16 20:09:59.659222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:12.601 [2024-05-16 20:09:59.659225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:13.533 [2024-05-16 20:10:00.459797] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.533 20:10:00 
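Before the host-management target comes up, nvmftestinit (traced above) turns the two E810 ports into a point-to-point test link: cvl_0_0 is moved into a private network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). A condensed replay of those commands, copied from the trace; the surrounding script also handles cleanup and address selection, which is omitted here:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # target address, reached over cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and the reverse direction

    # The target app is then launched inside that namespace, as in the startup banner above:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E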
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:13.533 Malloc0 00:11:13.533 [2024-05-16 20:10:00.518362] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:13.533 [2024-05-16 20:10:00.518658] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=171203 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 171203 /var/tmp/bdevperf.sock 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 171203 ']' 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:13.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:13.533 { 00:11:13.533 "params": { 00:11:13.533 "name": "Nvme$subsystem", 00:11:13.533 "trtype": "$TEST_TRANSPORT", 00:11:13.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:13.533 "adrfam": "ipv4", 00:11:13.533 "trsvcid": "$NVMF_PORT", 00:11:13.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:13.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:13.533 "hdgst": ${hdgst:-false}, 00:11:13.533 "ddgst": ${ddgst:-false} 00:11:13.533 }, 00:11:13.533 "method": "bdev_nvme_attach_controller" 00:11:13.533 } 00:11:13.533 EOF 00:11:13.533 )") 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:13.533 20:10:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:13.533 "params": { 00:11:13.533 "name": "Nvme0", 00:11:13.533 "trtype": "tcp", 00:11:13.533 "traddr": "10.0.0.2", 00:11:13.533 "adrfam": "ipv4", 00:11:13.533 "trsvcid": "4420", 00:11:13.533 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:13.533 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:13.533 "hdgst": false, 00:11:13.533 "ddgst": false 00:11:13.533 }, 00:11:13.533 "method": "bdev_nvme_attach_controller" 00:11:13.533 }' 00:11:13.533 [2024-05-16 20:10:00.588744] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:11:13.533 [2024-05-16 20:10:00.588819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171203 ] 00:11:13.533 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.533 [2024-05-16 20:10:00.651223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.791 [2024-05-16 20:10:00.763969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.048 Running I/O for 10 seconds... 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.048 20:10:01 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:11:14.048 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:11:14.308 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:11:14.308 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:14.308 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:14.308 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:14.308 20:10:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.308 20:10:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:14.308 20:10:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.308 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:11:14.308 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:11:14.308 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:14.308 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:14.308 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:14.308 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:14.308 20:10:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.308 20:10:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:14.308 [2024-05-16 20:10:01.393749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.308 [2024-05-16 20:10:01.393816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.308 [2024-05-16 20:10:01.393845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.308 [2024-05-16 20:10:01.393868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.308 [2024-05-16 20:10:01.393885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.308 [2024-05-16 20:10:01.393909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.308 [2024-05-16 20:10:01.393924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.308 [2024-05-16 20:10:01.393949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.308 [2024-05-16 20:10:01.393964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:11:14.308 [2024-05-16 20:10:01.393978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the matching nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs repeat here for every command still queued on qid:1 (WRITE cid:26-63, lba 85248-89984, and READ cid:0-25, lba 81920-85120, len:128 each), all reported as ABORTED - SQ DELETION (00/08) while the submission queue was being torn down ...]
00:11:14.310 [2024-05-16 20:10:01.395801] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15f4590 was disconnected and freed. reset controller.
00:11:14.310 [2024-05-16 20:10:01.396953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:14.310 20:10:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.310 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:14.310 20:10:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.310 20:10:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:14.310 task offset: 85248 on job bdev=Nvme0n1 fails 00:11:14.310 00:11:14.310 Latency(us) 00:11:14.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:14.310 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:14.310 Job: Nvme0n1 ended in about 0.40 seconds with error 00:11:14.310 Verification LBA range: start 0x0 length 0x400 00:11:14.310 Nvme0n1 : 0.40 1593.50 99.59 159.35 0.00 35454.28 2585.03 34369.99 00:11:14.310 =================================================================================================================== 00:11:14.310 Total : 1593.50 99.59 159.35 0.00 35454.28 2585.03 34369.99 00:11:14.310 [2024-05-16 20:10:01.398864] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:14.310 [2024-05-16 20:10:01.398905] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e3700 (9): Bad file descriptor 00:11:14.310 [2024-05-16 20:10:01.401180] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:11:14.310 [2024-05-16 20:10:01.401289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:11:14.310 [2024-05-16 20:10:01.401318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.310 [2024-05-16 20:10:01.401340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:11:14.310 [2024-05-16 20:10:01.401355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:11:14.310 [2024-05-16 20:10:01.401369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:11:14.310 [2024-05-16 20:10:01.401382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11e3700 00:11:14.310 [2024-05-16 20:10:01.401414] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e3700 (9): Bad file descriptor 00:11:14.310 [2024-05-16 20:10:01.401439] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:11:14.310 [2024-05-16 20:10:01.401462] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:11:14.310 [2024-05-16 20:10:01.401478] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:11:14.310 [2024-05-16 20:10:01.401499] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
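For context on the abort storm and failed reset above: host_management.sh provokes them deliberately. It polls the bdevperf RPC socket until the namespace has completed at least 100 reads, then removes the host from the subsystem while I/O is still in flight, so every queued command completes as ABORTED - SQ DELETION and the bdev layer attempts (and here fails) a controller reset. A minimal sketch of that polling step, condensed from the xtrace; rpc_cmd, the socket path and the jq filter are taken from the trace, while the retry bound is illustrative:

# Sketch only, not the script itself.
ret=1
for ((i = 20; i != 0; i--)); do
    # bdev_get_iostat returns per-bdev counters; num_read_ops is the read count.
    read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
        jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        ret=0
        break
    fi
    sleep 0.25
done
# Yanking the host while I/O is outstanding triggers the SQ-deletion aborts logged above.
rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0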
00:11:14.310 20:10:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.310 20:10:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:15.682 20:10:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 171203 00:11:15.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (171203) - No such process 00:11:15.682 20:10:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:15.682 20:10:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:15.682 20:10:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:15.682 20:10:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:15.682 20:10:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:15.682 20:10:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:15.682 20:10:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:15.682 20:10:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:15.682 { 00:11:15.682 "params": { 00:11:15.682 "name": "Nvme$subsystem", 00:11:15.682 "trtype": "$TEST_TRANSPORT", 00:11:15.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:15.682 "adrfam": "ipv4", 00:11:15.682 "trsvcid": "$NVMF_PORT", 00:11:15.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:15.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:15.682 "hdgst": ${hdgst:-false}, 00:11:15.682 "ddgst": ${ddgst:-false} 00:11:15.682 }, 00:11:15.682 "method": "bdev_nvme_attach_controller" 00:11:15.682 } 00:11:15.682 EOF 00:11:15.682 )") 00:11:15.682 20:10:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:15.682 20:10:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:15.682 20:10:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:15.682 20:10:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:15.682 "params": { 00:11:15.682 "name": "Nvme0", 00:11:15.682 "trtype": "tcp", 00:11:15.682 "traddr": "10.0.0.2", 00:11:15.682 "adrfam": "ipv4", 00:11:15.682 "trsvcid": "4420", 00:11:15.682 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:15.682 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:15.682 "hdgst": false, 00:11:15.682 "ddgst": false 00:11:15.682 }, 00:11:15.682 "method": "bdev_nvme_attach_controller" 00:11:15.682 }' 00:11:15.682 [2024-05-16 20:10:02.455434] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:11:15.682 [2024-05-16 20:10:02.455510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171366 ] 00:11:15.682 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.682 [2024-05-16 20:10:02.519215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.682 [2024-05-16 20:10:02.631104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.940 Running I/O for 1 seconds... 
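The --json /dev/fd/62 document consumed by bdevperf above is assembled by gen_nvmf_target_json, whose heredoc, jq and printf steps are visible in the trace. A stand-alone sketch that reproduces the single-controller fragment shown in the printf output (the helper name here is made up, and the values are hard-coded from this run; the real helper substitutes them per subsystem and, as far as the wrapped trace shows, embeds the fragment in a full bdev-subsystem config before bdevperf reads it):

# Sketch: emit the bdev_nvme_attach_controller entry seen in the printf output.
gen_nvme0_json() {
    cat <<'EOF'
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
# jq . validates and pretty-prints the fragment, mirroring the trace's jq step.
gen_nvme0_json | jq .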
00:11:16.873 00:11:16.873 Latency(us) 00:11:16.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:16.873 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:16.873 Verification LBA range: start 0x0 length 0x400 00:11:16.873 Nvme0n1 : 1.02 1690.85 105.68 0.00 0.00 37239.82 4490.43 32816.55 00:11:16.873 =================================================================================================================== 00:11:16.873 Total : 1690.85 105.68 0.00 0.00 37239.82 4490.43 32816.55 00:11:17.130 20:10:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:17.130 20:10:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:17.130 20:10:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:17.130 20:10:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:17.130 20:10:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:17.130 20:10:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:17.130 20:10:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:17.130 20:10:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:17.130 20:10:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:17.130 20:10:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:17.130 20:10:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:17.130 rmmod nvme_tcp 00:11:17.130 rmmod nvme_fabrics 00:11:17.130 rmmod nvme_keyring 00:11:17.130 20:10:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:17.130 20:10:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:17.130 20:10:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:17.130 20:10:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 171030 ']' 00:11:17.130 20:10:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 171030 00:11:17.130 20:10:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 171030 ']' 00:11:17.130 20:10:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 171030 00:11:17.130 20:10:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:11:17.388 20:10:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:17.388 20:10:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 171030 00:11:17.388 20:10:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:17.388 20:10:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:17.388 20:10:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 171030' 00:11:17.388 killing process with pid 171030 00:11:17.388 20:10:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 171030 00:11:17.388 [2024-05-16 20:10:04.305042] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is 
deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:17.388 20:10:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 171030 00:11:17.646 [2024-05-16 20:10:04.578407] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:17.646 20:10:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:17.646 20:10:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:17.646 20:10:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:17.646 20:10:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:17.646 20:10:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:17.646 20:10:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.646 20:10:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:17.646 20:10:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.547 20:10:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:19.547 20:10:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:19.547 00:11:19.547 real 0m9.319s 00:11:19.547 user 0m23.002s 00:11:19.547 sys 0m2.539s 00:11:19.547 20:10:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:19.547 20:10:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:19.547 ************************************ 00:11:19.547 END TEST nvmf_host_management 00:11:19.547 ************************************ 00:11:19.547 20:10:06 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:19.547 20:10:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:19.547 20:10:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:19.547 20:10:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:19.805 ************************************ 00:11:19.805 START TEST nvmf_lvol 00:11:19.805 ************************************ 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:19.805 * Looking for test storage... 
00:11:19.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.805 20:10:06 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:19.805 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:19.806 20:10:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:19.806 20:10:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:19.806 20:10:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:19.806 20:10:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:19.806 20:10:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:19.806 20:10:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:19.806 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:19.806 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.806 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:19.806 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:19.806 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:19.806 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.806 20:10:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:19.806 20:10:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.806 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:19.806 20:10:06 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:19.806 20:10:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:19.806 20:10:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:21.705 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:21.705 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:21.705 Found net devices under 0000:09:00.0: cvl_0_0 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:21.705 Found net devices under 0000:09:00.1: cvl_0_1 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:21.705 
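What the "Found ..." lines above amount to: nvmf/common.sh matches the host's PCI functions against a table of known NIC device IDs (the e810 0x159b parts on this machine), then resolves each function to its kernel interface through sysfs. A reduced sketch using the two addresses discovered here, with the same pci_net_devs expansion the trace shows:

# Sketch: map the detected e810 functions to their net devices via sysfs.
net_devs=()
for pci in 0000:09:00.0 0000:09:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done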
20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:21.705 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:21.706 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.706 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.706 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.706 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.706 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:21.706 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.706 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.706 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.706 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:21.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:21.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:11:21.706 00:11:21.706 --- 10.0.0.2 ping statistics --- 00:11:21.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.706 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:11:21.706 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:21.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:11:21.706 00:11:21.706 --- 10.0.0.1 ping statistics --- 00:11:21.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.706 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:11:21.706 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.706 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:21.706 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:21.706 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.706 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:21.706 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:21.706 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.706 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:21.706 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:21.964 20:10:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:21.964 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:21.964 20:10:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:21.964 20:10:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:21.964 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=173571 00:11:21.964 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:21.964 20:10:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 173571 00:11:21.964 20:10:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 173571 ']' 00:11:21.964 20:10:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.964 20:10:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:21.964 20:10:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.964 20:10:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:21.964 20:10:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:21.964 [2024-05-16 20:10:08.901792] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:11:21.964 [2024-05-16 20:10:08.901882] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.964 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.964 [2024-05-16 20:10:08.964350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:21.964 [2024-05-16 20:10:09.069974] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.964 [2024-05-16 20:10:09.070036] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
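Before nvmf_tgt was launched above, nvmf_tcp_init carved the two e810 ports into a point-to-point test topology: cvl_0_0 is moved into a private network namespace for the target and addressed 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and TCP port 4420 is opened on the test interface. The equivalent commands, lifted from the trace (interface, namespace names and addresses as found on this host):

ip netns add cvl_0_0_ns_spdk                         # namespace that will run nvmf_tgt
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP 4420 as the harness does
ping -c 1 10.0.0.2                                   # reachability check in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# The target is then started inside the namespace, exactly as the trace shows:
#   ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7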
00:11:21.964 [2024-05-16 20:10:09.070050] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.964 [2024-05-16 20:10:09.070061] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.964 [2024-05-16 20:10:09.070071] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.964 [2024-05-16 20:10:09.070151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.964 [2024-05-16 20:10:09.070181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.964 [2024-05-16 20:10:09.070183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.221 20:10:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:22.221 20:10:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:11:22.221 20:10:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:22.221 20:10:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:22.221 20:10:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:22.221 20:10:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.221 20:10:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:22.478 [2024-05-16 20:10:09.485507] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.478 20:10:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:22.736 20:10:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:22.736 20:10:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:22.994 20:10:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:22.994 20:10:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:23.252 20:10:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:23.817 20:10:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=60136c12-d982-42d8-91f3-c133215bbb7c 00:11:23.817 20:10:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 60136c12-d982-42d8-91f3-c133215bbb7c lvol 20 00:11:24.074 20:10:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=53737a4c-abb9-4e47-bba8-0333c51c691d 00:11:24.074 20:10:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:24.336 20:10:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 53737a4c-abb9-4e47-bba8-0333c51c691d 00:11:24.594 20:10:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
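The RPC sequence above builds the data path bottom-up: a TCP transport, two malloc bdevs striped into raid0, an lvolstore on the raid, a logical volume, and a subsystem that exports that lvol on 10.0.0.2:4420. Condensed into a sketch (the rpc.py path is shortened into a variable, UUID capture mirrors what the script does, and the size comments are the editor's gloss based on the MALLOC_BDEV_SIZE=64 and LVOL_BDEV_INIT_SIZE=20 values visible earlier in the trace, not on anything printed by the RPCs):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, 8 KiB in-capsule data
$rpc bdev_malloc_create 64 512                                  # Malloc0: 64 x 512-byte-block malloc bdev
$rpc bdev_malloc_create 64 512                                  # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # stripe both malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                  # lvolstore UUID, e.g. 60136c12-...
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                 # size 20 (LVOL_BDEV_INIT_SIZE) lvol
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420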
00:11:24.851 [2024-05-16 20:10:11.740844] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:24.851 [2024-05-16 20:10:11.741211] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.851 20:10:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:25.108 20:10:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=173994 00:11:25.108 20:10:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:25.108 20:10:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:25.108 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.043 20:10:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 53737a4c-abb9-4e47-bba8-0333c51c691d MY_SNAPSHOT 00:11:26.301 20:10:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7715486b-0638-4b48-b156-3b5c96155644 00:11:26.301 20:10:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 53737a4c-abb9-4e47-bba8-0333c51c691d 30 00:11:26.570 20:10:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7715486b-0638-4b48-b156-3b5c96155644 MY_CLONE 00:11:26.833 20:10:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f84ceca3-647c-4cf2-ae3d-7ed48c9c1f4c 00:11:26.833 20:10:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f84ceca3-647c-4cf2-ae3d-7ed48c9c1f4c 00:11:27.768 20:10:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 173994 00:11:35.874 Initializing NVMe Controllers 00:11:35.874 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:35.874 Controller IO queue size 128, less than required. 00:11:35.874 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:35.874 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:35.874 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:35.874 Initialization complete. Launching workers. 
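While the lvol operations proceed, spdk_nvme_perf has been started in the background against the new subsystem. Its arguments as used above: -r selects the target by transport string, -o 4096 issues 4 KiB I/Os, -q 128 keeps 128 commands outstanding per queue, -w randwrite with -t 10 runs random writes for ten seconds, and -c 0x18 pins the workload to lcores 3 and 4, which is why the completion summary later reports one queue per those cores. The -s 512 value is the harness's memory sizing and is best checked against perf --help before reusing. Re-run by hand, under those assumptions, it would look like this:

# Sketch: the same workload launched manually; the target from the trace is
# assumed to still be listening on 10.0.0.2:4420.
perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
args=(
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'  # which target to attach to
    -o 4096       # I/O size in bytes
    -q 128        # outstanding commands per queue
    -s 512        # memory sizing used by the harness (see perf --help)
    -w randwrite  # access pattern
    -t 10         # run time in seconds
    -c 0x18       # core mask: lcores 3 and 4
)
"$perf" "${args[@]}"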
00:11:35.874 ======================================================== 00:11:35.874 Latency(us) 00:11:35.874 Device Information : IOPS MiB/s Average min max 00:11:35.874 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10544.20 41.19 12139.11 2373.48 68319.90 00:11:35.874 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10385.00 40.57 12329.52 2471.99 79680.29 00:11:35.874 ======================================================== 00:11:35.874 Total : 20929.20 81.75 12233.59 2373.48 79680.29 00:11:35.874 00:11:35.874 20:10:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:35.874 20:10:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 53737a4c-abb9-4e47-bba8-0333c51c691d 00:11:36.131 20:10:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 60136c12-d982-42d8-91f3-c133215bbb7c 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:36.389 rmmod nvme_tcp 00:11:36.389 rmmod nvme_fabrics 00:11:36.389 rmmod nvme_keyring 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 173571 ']' 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 173571 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 173571 ']' 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 173571 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 173571 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 173571' 00:11:36.389 killing process with pid 173571 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 173571 00:11:36.389 [2024-05-16 20:10:23.482265] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for 
removal in v24.09 hit 1 times 00:11:36.389 20:10:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 173571 00:11:36.962 20:10:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:36.962 20:10:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:36.962 20:10:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:36.962 20:10:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:36.962 20:10:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:36.962 20:10:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.962 20:10:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.962 20:10:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:38.860 00:11:38.860 real 0m19.145s 00:11:38.860 user 1m5.804s 00:11:38.860 sys 0m5.540s 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:38.860 ************************************ 00:11:38.860 END TEST nvmf_lvol 00:11:38.860 ************************************ 00:11:38.860 20:10:25 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:38.860 20:10:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:38.860 20:10:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:38.860 20:10:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:38.860 ************************************ 00:11:38.860 START TEST nvmf_lvs_grow 00:11:38.860 ************************************ 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:38.860 * Looking for test storage... 
00:11:38.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:38.860 20:10:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:11:38.861 20:10:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:41.391 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:41.391 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:41.391 Found net devices under 0000:09:00.0: cvl_0_0 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:41.391 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:41.392 Found net devices under 0000:09:00.1: cvl_0_1 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.392 20:10:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:41.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:41.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:11:41.392 00:11:41.392 --- 10.0.0.2 ping statistics --- 00:11:41.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.392 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:41.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:11:41.392 00:11:41.392 --- 10.0.0.1 ping statistics --- 00:11:41.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.392 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=177258 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 177258 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 177258 ']' 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:41.392 [2024-05-16 20:10:28.155359] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:11:41.392 [2024-05-16 20:10:28.155447] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.392 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.392 [2024-05-16 20:10:28.221747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.392 [2024-05-16 20:10:28.333985] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.392 [2024-05-16 20:10:28.334039] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
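The two E810 ports enumerated above come up as cvl_0_0 (target side, moved into a network namespace) and cvl_0_1 (initiator side). The wiring the trace performs, condensed; interface names and addresses are the ones from this run:

  #!/usr/bin/env bash
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                            # the target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the listener
  ping -c 1 10.0.0.2                                      # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator
  # nvmf_tgt is then started inside the namespace: ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1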
00:11:41.392 [2024-05-16 20:10:28.334070] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.392 [2024-05-16 20:10:28.334082] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.392 [2024-05-16 20:10:28.334092] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.392 [2024-05-16 20:10:28.334119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.392 20:10:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:41.649 [2024-05-16 20:10:28.744939] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:41.649 20:10:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:41.649 20:10:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:41.649 20:10:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:41.649 20:10:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:41.907 ************************************ 00:11:41.907 START TEST lvs_grow_clean 00:11:41.907 ************************************ 00:11:41.907 20:10:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:11:41.907 20:10:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:41.907 20:10:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:41.907 20:10:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:41.907 20:10:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:41.907 20:10:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:41.907 20:10:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:41.907 20:10:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:41.907 20:10:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:41.907 20:10:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:42.165 20:10:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:11:42.165 20:10:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:42.424 20:10:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=23ae4649-0ef6-4b40-96db-b9d7dc4abf6a 00:11:42.424 20:10:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23ae4649-0ef6-4b40-96db-b9d7dc4abf6a 00:11:42.424 20:10:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:42.681 20:10:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:42.681 20:10:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:42.681 20:10:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 23ae4649-0ef6-4b40-96db-b9d7dc4abf6a lvol 150 00:11:42.939 20:10:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a0058290-1d53-4317-af12-c9d6da3b63d8 00:11:42.939 20:10:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:42.939 20:10:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:43.196 [2024-05-16 20:10:30.091034] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:43.196 [2024-05-16 20:10:30.091136] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:43.196 true 00:11:43.196 20:10:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23ae4649-0ef6-4b40-96db-b9d7dc4abf6a 00:11:43.196 20:10:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:43.454 20:10:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:43.454 20:10:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:43.711 20:10:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a0058290-1d53-4317-af12-c9d6da3b63d8 00:11:43.968 20:10:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:43.968 [2024-05-16 20:10:31.093914] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:43.968 [2024-05-16 
20:10:31.094235] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.968 20:10:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:44.226 20:10:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=177696 00:11:44.226 20:10:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:44.226 20:10:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:44.226 20:10:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 177696 /var/tmp/bdevperf.sock 00:11:44.226 20:10:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 177696 ']' 00:11:44.226 20:10:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:44.226 20:10:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:44.226 20:10:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:44.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:44.226 20:10:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:44.226 20:10:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:44.483 [2024-05-16 20:10:31.389867] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
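The lvs_grow_clean setup traced above, condensed: a 200 MiB file-backed AIO bdev carries an lvstore created with 4 MiB clusters, so it starts with 49 data clusters; enlarging the file and rescanning the AIO bdev does not change that count until bdev_lvol_grow_lvstore runs later in the test. A sketch under the same assumptions (backing-file path shortened):

  #!/usr/bin/env bash
  truncate -s 200M ./aio_bdev                         # backing file (path shortened from the trace)
  rpc.py bdev_aio_create ./aio_bdev aio_bdev 4096     # AIO bdev with 4 KiB blocks
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)  # lvol (size argument 150, as in the trace)
  truncate -s 400M ./aio_bdev                         # grow the file underneath the bdev
  rpc.py bdev_aio_rescan aio_bdev                     # bdev now reports 102400 blocks instead of 51200
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49 until the grow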
00:11:44.483 [2024-05-16 20:10:31.389948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177696 ] 00:11:44.483 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.483 [2024-05-16 20:10:31.450970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.483 [2024-05-16 20:10:31.567955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.741 20:10:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:44.741 20:10:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:11:44.741 20:10:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:44.999 Nvme0n1 00:11:45.256 20:10:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:45.514 [ 00:11:45.514 { 00:11:45.514 "name": "Nvme0n1", 00:11:45.514 "aliases": [ 00:11:45.514 "a0058290-1d53-4317-af12-c9d6da3b63d8" 00:11:45.514 ], 00:11:45.514 "product_name": "NVMe disk", 00:11:45.514 "block_size": 4096, 00:11:45.514 "num_blocks": 38912, 00:11:45.514 "uuid": "a0058290-1d53-4317-af12-c9d6da3b63d8", 00:11:45.514 "assigned_rate_limits": { 00:11:45.514 "rw_ios_per_sec": 0, 00:11:45.514 "rw_mbytes_per_sec": 0, 00:11:45.514 "r_mbytes_per_sec": 0, 00:11:45.514 "w_mbytes_per_sec": 0 00:11:45.514 }, 00:11:45.514 "claimed": false, 00:11:45.514 "zoned": false, 00:11:45.514 "supported_io_types": { 00:11:45.514 "read": true, 00:11:45.514 "write": true, 00:11:45.514 "unmap": true, 00:11:45.514 "write_zeroes": true, 00:11:45.514 "flush": true, 00:11:45.514 "reset": true, 00:11:45.514 "compare": true, 00:11:45.514 "compare_and_write": true, 00:11:45.514 "abort": true, 00:11:45.514 "nvme_admin": true, 00:11:45.514 "nvme_io": true 00:11:45.514 }, 00:11:45.514 "memory_domains": [ 00:11:45.514 { 00:11:45.514 "dma_device_id": "system", 00:11:45.514 "dma_device_type": 1 00:11:45.514 } 00:11:45.514 ], 00:11:45.514 "driver_specific": { 00:11:45.514 "nvme": [ 00:11:45.514 { 00:11:45.514 "trid": { 00:11:45.514 "trtype": "TCP", 00:11:45.514 "adrfam": "IPv4", 00:11:45.514 "traddr": "10.0.0.2", 00:11:45.514 "trsvcid": "4420", 00:11:45.514 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:45.514 }, 00:11:45.514 "ctrlr_data": { 00:11:45.514 "cntlid": 1, 00:11:45.514 "vendor_id": "0x8086", 00:11:45.514 "model_number": "SPDK bdev Controller", 00:11:45.514 "serial_number": "SPDK0", 00:11:45.514 "firmware_revision": "24.09", 00:11:45.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:45.514 "oacs": { 00:11:45.514 "security": 0, 00:11:45.514 "format": 0, 00:11:45.514 "firmware": 0, 00:11:45.514 "ns_manage": 0 00:11:45.514 }, 00:11:45.514 "multi_ctrlr": true, 00:11:45.514 "ana_reporting": false 00:11:45.514 }, 00:11:45.514 "vs": { 00:11:45.514 "nvme_version": "1.3" 00:11:45.514 }, 00:11:45.514 "ns_data": { 00:11:45.514 "id": 1, 00:11:45.514 "can_share": true 00:11:45.514 } 00:11:45.514 } 00:11:45.514 ], 00:11:45.514 "mp_policy": "active_passive" 00:11:45.514 } 00:11:45.514 } 00:11:45.514 ] 00:11:45.514 20:10:32 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=177830 00:11:45.514 20:10:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:45.514 20:10:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:45.514 Running I/O for 10 seconds... 00:11:46.446 Latency(us) 00:11:46.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:46.446 Nvme0n1 : 1.00 14098.00 55.07 0.00 0.00 0.00 0.00 0.00 00:11:46.446 =================================================================================================================== 00:11:46.446 Total : 14098.00 55.07 0.00 0.00 0.00 0.00 0.00 00:11:46.446 00:11:47.382 20:10:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 23ae4649-0ef6-4b40-96db-b9d7dc4abf6a 00:11:47.640 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:47.640 Nvme0n1 : 2.00 14478.50 56.56 0.00 0.00 0.00 0.00 0.00 00:11:47.640 =================================================================================================================== 00:11:47.640 Total : 14478.50 56.56 0.00 0.00 0.00 0.00 0.00 00:11:47.640 00:11:47.640 true 00:11:47.640 20:10:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23ae4649-0ef6-4b40-96db-b9d7dc4abf6a 00:11:47.640 20:10:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:47.899 20:10:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:47.899 20:10:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:47.899 20:10:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 177830 00:11:48.466 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:48.466 Nvme0n1 : 3.00 14478.33 56.56 0.00 0.00 0.00 0.00 0.00 00:11:48.466 =================================================================================================================== 00:11:48.466 Total : 14478.33 56.56 0.00 0.00 0.00 0.00 0.00 00:11:48.466 00:11:49.839 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:49.839 Nvme0n1 : 4.00 14510.00 56.68 0.00 0.00 0.00 0.00 0.00 00:11:49.839 =================================================================================================================== 00:11:49.839 Total : 14510.00 56.68 0.00 0.00 0.00 0.00 0.00 00:11:49.839 00:11:50.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:50.772 Nvme0n1 : 5.00 14656.00 57.25 0.00 0.00 0.00 0.00 0.00 00:11:50.772 =================================================================================================================== 00:11:50.772 Total : 14656.00 57.25 0.00 0.00 0.00 0.00 0.00 00:11:50.772 00:11:51.705 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:51.705 Nvme0n1 : 6.00 14774.50 57.71 0.00 0.00 0.00 0.00 0.00 00:11:51.705 
=================================================================================================================== 00:11:51.705 Total : 14774.50 57.71 0.00 0.00 0.00 0.00 0.00 00:11:51.705 00:11:52.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:52.639 Nvme0n1 : 7.00 14786.57 57.76 0.00 0.00 0.00 0.00 0.00 00:11:52.639 =================================================================================================================== 00:11:52.639 Total : 14786.57 57.76 0.00 0.00 0.00 0.00 0.00 00:11:52.639 00:11:53.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:53.573 Nvme0n1 : 8.00 14763.88 57.67 0.00 0.00 0.00 0.00 0.00 00:11:53.573 =================================================================================================================== 00:11:53.573 Total : 14763.88 57.67 0.00 0.00 0.00 0.00 0.00 00:11:53.573 00:11:54.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:54.507 Nvme0n1 : 9.00 14768.11 57.69 0.00 0.00 0.00 0.00 0.00 00:11:54.507 =================================================================================================================== 00:11:54.507 Total : 14768.11 57.69 0.00 0.00 0.00 0.00 0.00 00:11:54.507 00:11:55.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.442 Nvme0n1 : 10.00 14770.90 57.70 0.00 0.00 0.00 0.00 0.00 00:11:55.442 =================================================================================================================== 00:11:55.442 Total : 14770.90 57.70 0.00 0.00 0.00 0.00 0.00 00:11:55.442 00:11:55.701 00:11:55.701 Latency(us) 00:11:55.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:55.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.701 Nvme0n1 : 10.01 14773.44 57.71 0.00 0.00 8658.73 3094.76 16408.27 00:11:55.701 =================================================================================================================== 00:11:55.701 Total : 14773.44 57.71 0.00 0.00 8658.73 3094.76 16408.27 00:11:55.701 0 00:11:55.701 20:10:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 177696 00:11:55.701 20:10:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 177696 ']' 00:11:55.701 20:10:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 177696 00:11:55.701 20:10:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:11:55.701 20:10:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:55.701 20:10:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 177696 00:11:55.701 20:10:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:55.701 20:10:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:55.701 20:10:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 177696' 00:11:55.701 killing process with pid 177696 00:11:55.701 20:10:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 177696 00:11:55.701 Received shutdown signal, test time was about 10.000000 seconds 00:11:55.701 00:11:55.701 Latency(us) 00:11:55.701 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:11:55.701 =================================================================================================================== 00:11:55.701 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:55.701 20:10:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 177696 00:11:55.959 20:10:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:56.216 20:10:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:56.474 20:10:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23ae4649-0ef6-4b40-96db-b9d7dc4abf6a 00:11:56.474 20:10:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:56.731 20:10:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:56.731 20:10:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:56.731 20:10:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:56.988 [2024-05-16 20:10:43.985677] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:56.988 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23ae4649-0ef6-4b40-96db-b9d7dc4abf6a 00:11:56.988 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:11:56.988 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23ae4649-0ef6-4b40-96db-b9d7dc4abf6a 00:11:56.989 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.989 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:56.989 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.989 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:56.989 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.989 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:56.989 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.989 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:56.989 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23ae4649-0ef6-4b40-96db-b9d7dc4abf6a 00:11:57.246 request: 00:11:57.246 { 00:11:57.246 "uuid": "23ae4649-0ef6-4b40-96db-b9d7dc4abf6a", 00:11:57.246 "method": "bdev_lvol_get_lvstores", 00:11:57.246 "req_id": 1 00:11:57.246 } 00:11:57.246 Got JSON-RPC error response 00:11:57.246 response: 00:11:57.246 { 00:11:57.246 "code": -19, 00:11:57.246 "message": "No such device" 00:11:57.246 } 00:11:57.246 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:11:57.246 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:57.246 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:57.246 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:57.246 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:57.504 aio_bdev 00:11:57.504 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a0058290-1d53-4317-af12-c9d6da3b63d8 00:11:57.504 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=a0058290-1d53-4317-af12-c9d6da3b63d8 00:11:57.504 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:57.504 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:11:57.504 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:57.504 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:57.504 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:57.761 20:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a0058290-1d53-4317-af12-c9d6da3b63d8 -t 2000 00:11:58.018 [ 00:11:58.018 { 00:11:58.018 "name": "a0058290-1d53-4317-af12-c9d6da3b63d8", 00:11:58.018 "aliases": [ 00:11:58.018 "lvs/lvol" 00:11:58.018 ], 00:11:58.018 "product_name": "Logical Volume", 00:11:58.018 "block_size": 4096, 00:11:58.018 "num_blocks": 38912, 00:11:58.018 "uuid": "a0058290-1d53-4317-af12-c9d6da3b63d8", 00:11:58.018 "assigned_rate_limits": { 00:11:58.018 "rw_ios_per_sec": 0, 00:11:58.018 "rw_mbytes_per_sec": 0, 00:11:58.018 "r_mbytes_per_sec": 0, 00:11:58.018 "w_mbytes_per_sec": 0 00:11:58.018 }, 00:11:58.018 "claimed": false, 00:11:58.018 "zoned": false, 00:11:58.018 "supported_io_types": { 00:11:58.018 "read": true, 00:11:58.018 "write": true, 00:11:58.018 "unmap": true, 00:11:58.018 "write_zeroes": true, 00:11:58.018 "flush": false, 00:11:58.018 "reset": true, 00:11:58.018 "compare": false, 00:11:58.018 "compare_and_write": false, 00:11:58.018 "abort": false, 00:11:58.018 "nvme_admin": false, 00:11:58.018 "nvme_io": false 00:11:58.018 }, 00:11:58.018 "driver_specific": { 00:11:58.018 "lvol": { 00:11:58.018 "lvol_store_uuid": "23ae4649-0ef6-4b40-96db-b9d7dc4abf6a", 00:11:58.018 "base_bdev": "aio_bdev", 
00:11:58.018 "thin_provision": false, 00:11:58.018 "num_allocated_clusters": 38, 00:11:58.018 "snapshot": false, 00:11:58.018 "clone": false, 00:11:58.018 "esnap_clone": false 00:11:58.018 } 00:11:58.018 } 00:11:58.018 } 00:11:58.018 ] 00:11:58.018 20:10:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:11:58.018 20:10:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23ae4649-0ef6-4b40-96db-b9d7dc4abf6a 00:11:58.018 20:10:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:58.275 20:10:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:58.275 20:10:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23ae4649-0ef6-4b40-96db-b9d7dc4abf6a 00:11:58.275 20:10:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:58.533 20:10:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:58.533 20:10:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a0058290-1d53-4317-af12-c9d6da3b63d8 00:11:58.791 20:10:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 23ae4649-0ef6-4b40-96db-b9d7dc4abf6a 00:11:59.048 20:10:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:59.306 20:10:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:59.564 00:11:59.564 real 0m17.658s 00:11:59.564 user 0m17.154s 00:11:59.564 sys 0m1.846s 00:11:59.564 20:10:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:59.564 20:10:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:59.564 ************************************ 00:11:59.564 END TEST lvs_grow_clean 00:11:59.564 ************************************ 00:11:59.564 20:10:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:59.564 20:10:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:59.564 20:10:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:59.564 20:10:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:59.564 ************************************ 00:11:59.564 START TEST lvs_grow_dirty 00:11:59.564 ************************************ 00:11:59.564 20:10:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:11:59.564 20:10:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:59.564 20:10:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:59.564 20:10:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:11:59.564 20:10:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:59.564 20:10:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:59.564 20:10:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:59.564 20:10:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:59.564 20:10:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:59.564 20:10:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:59.822 20:10:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:59.822 20:10:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:00.080 20:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=7d6e59bc-3e84-4946-8f8a-a4a9e8898654 00:12:00.080 20:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d6e59bc-3e84-4946-8f8a-a4a9e8898654 00:12:00.080 20:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:00.337 20:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:00.337 20:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:00.337 20:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7d6e59bc-3e84-4946-8f8a-a4a9e8898654 lvol 150 00:12:00.595 20:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=36552e59-8a70-48b8-bb3f-240a099bc356 00:12:00.595 20:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:00.595 20:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:00.853 [2024-05-16 20:10:47.771153] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:00.853 [2024-05-16 20:10:47.771289] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:00.853 true 00:12:00.853 20:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d6e59bc-3e84-4946-8f8a-a4a9e8898654 00:12:00.853 20:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:12:01.111 20:10:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:01.111 20:10:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:01.368 20:10:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 36552e59-8a70-48b8-bb3f-240a099bc356 00:12:01.626 20:10:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:01.885 [2024-05-16 20:10:48.866466] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.885 20:10:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:02.143 20:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=179869 00:12:02.143 20:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:02.143 20:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:02.143 20:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 179869 /var/tmp/bdevperf.sock 00:12:02.143 20:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 179869 ']' 00:12:02.143 20:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:02.143 20:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:02.143 20:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:02.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:02.143 20:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:02.143 20:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:02.143 [2024-05-16 20:10:49.169504] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:12:02.143 [2024-05-16 20:10:49.169590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179869 ] 00:12:02.143 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.143 [2024-05-16 20:10:49.231951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.400 [2024-05-16 20:10:49.349330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.400 20:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:02.400 20:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:12:02.400 20:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:02.966 Nvme0n1 00:12:02.966 20:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:02.966 [ 00:12:02.966 { 00:12:02.966 "name": "Nvme0n1", 00:12:02.966 "aliases": [ 00:12:02.966 "36552e59-8a70-48b8-bb3f-240a099bc356" 00:12:02.966 ], 00:12:02.966 "product_name": "NVMe disk", 00:12:02.966 "block_size": 4096, 00:12:02.966 "num_blocks": 38912, 00:12:02.966 "uuid": "36552e59-8a70-48b8-bb3f-240a099bc356", 00:12:02.966 "assigned_rate_limits": { 00:12:02.966 "rw_ios_per_sec": 0, 00:12:02.966 "rw_mbytes_per_sec": 0, 00:12:02.966 "r_mbytes_per_sec": 0, 00:12:02.966 "w_mbytes_per_sec": 0 00:12:02.966 }, 00:12:02.966 "claimed": false, 00:12:02.966 "zoned": false, 00:12:02.966 "supported_io_types": { 00:12:02.966 "read": true, 00:12:02.966 "write": true, 00:12:02.966 "unmap": true, 00:12:02.966 "write_zeroes": true, 00:12:02.966 "flush": true, 00:12:02.966 "reset": true, 00:12:02.966 "compare": true, 00:12:02.966 "compare_and_write": true, 00:12:02.966 "abort": true, 00:12:02.966 "nvme_admin": true, 00:12:02.966 "nvme_io": true 00:12:02.966 }, 00:12:02.966 "memory_domains": [ 00:12:02.966 { 00:12:02.966 "dma_device_id": "system", 00:12:02.966 "dma_device_type": 1 00:12:02.966 } 00:12:02.966 ], 00:12:02.966 "driver_specific": { 00:12:02.966 "nvme": [ 00:12:02.966 { 00:12:02.966 "trid": { 00:12:02.966 "trtype": "TCP", 00:12:02.966 "adrfam": "IPv4", 00:12:02.966 "traddr": "10.0.0.2", 00:12:02.966 "trsvcid": "4420", 00:12:02.966 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:02.966 }, 00:12:02.966 "ctrlr_data": { 00:12:02.966 "cntlid": 1, 00:12:02.966 "vendor_id": "0x8086", 00:12:02.966 "model_number": "SPDK bdev Controller", 00:12:02.966 "serial_number": "SPDK0", 00:12:02.966 "firmware_revision": "24.09", 00:12:02.966 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:02.966 "oacs": { 00:12:02.966 "security": 0, 00:12:02.966 "format": 0, 00:12:02.966 "firmware": 0, 00:12:02.966 "ns_manage": 0 00:12:02.966 }, 00:12:02.966 "multi_ctrlr": true, 00:12:02.966 "ana_reporting": false 00:12:02.966 }, 00:12:02.966 "vs": { 00:12:02.966 "nvme_version": "1.3" 00:12:02.966 }, 00:12:02.966 "ns_data": { 00:12:02.966 "id": 1, 00:12:02.966 "can_share": true 00:12:02.966 } 00:12:02.966 } 00:12:02.966 ], 00:12:02.966 "mp_policy": "active_passive" 00:12:02.966 } 00:12:02.966 } 00:12:02.966 ] 00:12:03.224 20:10:50 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=180007 00:12:03.224 20:10:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:03.224 20:10:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:03.224 Running I/O for 10 seconds... 00:12:04.156 Latency(us) 00:12:04.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:04.156 Nvme0n1 : 1.00 14163.00 55.32 0.00 0.00 0.00 0.00 0.00 00:12:04.156 =================================================================================================================== 00:12:04.156 Total : 14163.00 55.32 0.00 0.00 0.00 0.00 0.00 00:12:04.156 00:12:05.088 20:10:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7d6e59bc-3e84-4946-8f8a-a4a9e8898654 00:12:05.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:05.088 Nvme0n1 : 2.00 14543.50 56.81 0.00 0.00 0.00 0.00 0.00 00:12:05.088 =================================================================================================================== 00:12:05.088 Total : 14543.50 56.81 0.00 0.00 0.00 0.00 0.00 00:12:05.088 00:12:05.345 true 00:12:05.345 20:10:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d6e59bc-3e84-4946-8f8a-a4a9e8898654 00:12:05.345 20:10:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:05.603 20:10:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:05.603 20:10:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:05.603 20:10:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 180007 00:12:06.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:06.169 Nvme0n1 : 3.00 14733.33 57.55 0.00 0.00 0.00 0.00 0.00 00:12:06.169 =================================================================================================================== 00:12:06.169 Total : 14733.33 57.55 0.00 0.00 0.00 0.00 0.00 00:12:06.169 00:12:07.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:07.102 Nvme0n1 : 4.00 14764.75 57.67 0.00 0.00 0.00 0.00 0.00 00:12:07.102 =================================================================================================================== 00:12:07.102 Total : 14764.75 57.67 0.00 0.00 0.00 0.00 0.00 00:12:07.102 00:12:08.476 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:08.476 Nvme0n1 : 5.00 14834.40 57.95 0.00 0.00 0.00 0.00 0.00 00:12:08.476 =================================================================================================================== 00:12:08.476 Total : 14834.40 57.95 0.00 0.00 0.00 0.00 0.00 00:12:08.476 00:12:09.408 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:09.408 Nvme0n1 : 6.00 14838.50 57.96 0.00 0.00 0.00 0.00 0.00 00:12:09.408 
=================================================================================================================== 00:12:09.408 Total : 14838.50 57.96 0.00 0.00 0.00 0.00 0.00 00:12:09.408 00:12:10.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:10.341 Nvme0n1 : 7.00 14841.43 57.97 0.00 0.00 0.00 0.00 0.00 00:12:10.341 =================================================================================================================== 00:12:10.341 Total : 14841.43 57.97 0.00 0.00 0.00 0.00 0.00 00:12:10.341 00:12:11.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:11.274 Nvme0n1 : 8.00 14851.75 58.01 0.00 0.00 0.00 0.00 0.00 00:12:11.274 =================================================================================================================== 00:12:11.274 Total : 14851.75 58.01 0.00 0.00 0.00 0.00 0.00 00:12:11.274 00:12:12.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:12.208 Nvme0n1 : 9.00 14916.22 58.27 0.00 0.00 0.00 0.00 0.00 00:12:12.208 =================================================================================================================== 00:12:12.208 Total : 14916.22 58.27 0.00 0.00 0.00 0.00 0.00 00:12:12.208 00:12:13.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:13.142 Nvme0n1 : 10.00 14948.60 58.39 0.00 0.00 0.00 0.00 0.00 00:12:13.142 =================================================================================================================== 00:12:13.142 Total : 14948.60 58.39 0.00 0.00 0.00 0.00 0.00 00:12:13.142 00:12:13.142 00:12:13.142 Latency(us) 00:12:13.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:13.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:13.142 Nvme0n1 : 10.00 14955.14 58.42 0.00 0.00 8553.86 2912.71 16602.45 00:12:13.142 =================================================================================================================== 00:12:13.142 Total : 14955.14 58.42 0.00 0.00 8553.86 2912.71 16602.45 00:12:13.142 0 00:12:13.142 20:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 179869 00:12:13.142 20:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 179869 ']' 00:12:13.142 20:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 179869 00:12:13.142 20:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:12:13.142 20:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:13.142 20:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 179869 00:12:13.399 20:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:13.399 20:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:13.399 20:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 179869' 00:12:13.399 killing process with pid 179869 00:12:13.399 20:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 179869 00:12:13.399 Received shutdown signal, test time was about 10.000000 seconds 00:12:13.399 00:12:13.399 Latency(us) 00:12:13.399 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:12:13.399 =================================================================================================================== 00:12:13.399 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:13.399 20:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 179869 00:12:13.656 20:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:13.913 20:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:14.170 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d6e59bc-3e84-4946-8f8a-a4a9e8898654 00:12:14.170 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:14.428 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:14.428 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:14.428 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 177258 00:12:14.428 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 177258 00:12:14.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 177258 Killed "${NVMF_APP[@]}" "$@" 00:12:14.428 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:14.428 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:14.428 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:14.428 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:14.428 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:14.428 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=181339 00:12:14.428 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:14.428 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 181339 00:12:14.428 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 181339 ']' 00:12:14.428 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.428 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:14.428 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
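For orientation, the grow-while-busy phase traced above reduces to roughly the following RPC sequence. Paths are shortened from the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk tree and the lvstore UUID is the one from this particular run, so read this as an illustrative sketch of the flow, not a reusable script:
# 1. Grow the backing file and let the AIO bdev pick up the new size
truncate -s 400M test/nvmf/target/aio_bdev
scripts/rpc.py bdev_aio_rescan aio_bdev
# 2. While bdevperf keeps issuing randwrite I/O over NVMe/TCP, grow the lvstore into the new space
scripts/rpc.py bdev_lvol_grow_lvstore -u 7d6e59bc-3e84-4946-8f8a-a4a9e8898654
# 3. Confirm the cluster count grew from the original 49 to 99
scripts/rpc.py bdev_lvol_get_lvstores -u 7d6e59bc-3e84-4946-8f8a-a4a9e8898654 | jq -r '.[0].total_data_clusters'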
00:12:14.428 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:14.428 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:14.428 [2024-05-16 20:11:01.495480] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:12:14.428 [2024-05-16 20:11:01.495562] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.428 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.428 [2024-05-16 20:11:01.559230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.686 [2024-05-16 20:11:01.665248] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.686 [2024-05-16 20:11:01.665300] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.686 [2024-05-16 20:11:01.665328] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.686 [2024-05-16 20:11:01.665339] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.686 [2024-05-16 20:11:01.665349] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.686 [2024-05-16 20:11:01.665389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.686 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:14.686 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:12:14.686 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:14.686 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:14.686 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:14.686 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.686 20:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:14.944 [2024-05-16 20:11:02.045304] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:14.944 [2024-05-16 20:11:02.045450] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:14.944 [2024-05-16 20:11:02.045506] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:14.944 20:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:14.944 20:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 36552e59-8a70-48b8-bb3f-240a099bc356 00:12:14.944 20:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=36552e59-8a70-48b8-bb3f-240a099bc356 00:12:14.944 20:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:14.944 20:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:12:14.944 20:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:14.944 20:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:14.944 20:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:15.202 20:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 36552e59-8a70-48b8-bb3f-240a099bc356 -t 2000 00:12:15.460 [ 00:12:15.460 { 00:12:15.460 "name": "36552e59-8a70-48b8-bb3f-240a099bc356", 00:12:15.460 "aliases": [ 00:12:15.460 "lvs/lvol" 00:12:15.460 ], 00:12:15.460 "product_name": "Logical Volume", 00:12:15.460 "block_size": 4096, 00:12:15.460 "num_blocks": 38912, 00:12:15.460 "uuid": "36552e59-8a70-48b8-bb3f-240a099bc356", 00:12:15.460 "assigned_rate_limits": { 00:12:15.460 "rw_ios_per_sec": 0, 00:12:15.460 "rw_mbytes_per_sec": 0, 00:12:15.460 "r_mbytes_per_sec": 0, 00:12:15.460 "w_mbytes_per_sec": 0 00:12:15.460 }, 00:12:15.460 "claimed": false, 00:12:15.460 "zoned": false, 00:12:15.460 "supported_io_types": { 00:12:15.460 "read": true, 00:12:15.460 "write": true, 00:12:15.460 "unmap": true, 00:12:15.460 "write_zeroes": true, 00:12:15.460 "flush": false, 00:12:15.460 "reset": true, 00:12:15.460 "compare": false, 00:12:15.460 "compare_and_write": false, 00:12:15.460 "abort": false, 00:12:15.460 "nvme_admin": false, 00:12:15.460 "nvme_io": false 00:12:15.460 }, 00:12:15.460 "driver_specific": { 00:12:15.460 "lvol": { 00:12:15.460 "lvol_store_uuid": "7d6e59bc-3e84-4946-8f8a-a4a9e8898654", 00:12:15.460 "base_bdev": "aio_bdev", 00:12:15.460 "thin_provision": false, 00:12:15.460 "num_allocated_clusters": 38, 00:12:15.460 "snapshot": false, 00:12:15.460 "clone": false, 00:12:15.460 "esnap_clone": false 00:12:15.460 } 00:12:15.460 } 00:12:15.460 } 00:12:15.460 ] 00:12:15.460 20:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:12:15.460 20:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d6e59bc-3e84-4946-8f8a-a4a9e8898654 00:12:15.460 20:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:15.717 20:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:15.717 20:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d6e59bc-3e84-4946-8f8a-a4a9e8898654 00:12:15.717 20:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:15.974 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:15.974 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:16.232 [2024-05-16 20:11:03.246121] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:16.232 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
7d6e59bc-3e84-4946-8f8a-a4a9e8898654 00:12:16.232 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:16.232 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d6e59bc-3e84-4946-8f8a-a4a9e8898654 00:12:16.232 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:16.232 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.232 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:16.232 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.232 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:16.232 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.232 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:16.232 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:16.232 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d6e59bc-3e84-4946-8f8a-a4a9e8898654 00:12:16.499 request: 00:12:16.499 { 00:12:16.499 "uuid": "7d6e59bc-3e84-4946-8f8a-a4a9e8898654", 00:12:16.499 "method": "bdev_lvol_get_lvstores", 00:12:16.499 "req_id": 1 00:12:16.499 } 00:12:16.499 Got JSON-RPC error response 00:12:16.499 response: 00:12:16.499 { 00:12:16.499 "code": -19, 00:12:16.499 "message": "No such device" 00:12:16.499 } 00:12:16.499 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:16.499 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:16.499 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:16.499 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:16.499 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:16.757 aio_bdev 00:12:16.757 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 36552e59-8a70-48b8-bb3f-240a099bc356 00:12:16.757 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=36552e59-8a70-48b8-bb3f-240a099bc356 00:12:16.757 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:16.757 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:12:16.757 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
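The dirty-recovery check being traced here follows the same pattern each time the AIO bdev is re-attached; schematically (again with shortened paths, the lvstore/lvol UUIDs taken from this run, and a combined jq filter in place of the script's two separate calls — a sketch, not the script itself):
# Re-attach the backing file: blobstore recovery replays the lvstore metadata
scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
# The lvol bdev should reappear on its own once examine finishes
scripts/rpc.py bdev_get_bdevs -b 36552e59-8a70-48b8-bb3f-240a099bc356 -t 2000
# The recovered lvstore must still show 61 free clusters out of 99 total
scripts/rpc.py bdev_lvol_get_lvstores -u 7d6e59bc-3e84-4946-8f8a-a4a9e8898654 | jq -r '.[0].free_clusters, .[0].total_data_clusters'
# And once the AIO bdev is deleted, the lvstore must disappear with it
scripts/rpc.py bdev_aio_delete aio_bdev
scripts/rpc.py bdev_lvol_get_lvstores -u 7d6e59bc-3e84-4946-8f8a-a4a9e8898654   # expected to fail with "No such device"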
00:12:16.757 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:16.757 20:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:17.014 20:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 36552e59-8a70-48b8-bb3f-240a099bc356 -t 2000 00:12:17.272 [ 00:12:17.272 { 00:12:17.272 "name": "36552e59-8a70-48b8-bb3f-240a099bc356", 00:12:17.272 "aliases": [ 00:12:17.272 "lvs/lvol" 00:12:17.272 ], 00:12:17.273 "product_name": "Logical Volume", 00:12:17.273 "block_size": 4096, 00:12:17.273 "num_blocks": 38912, 00:12:17.273 "uuid": "36552e59-8a70-48b8-bb3f-240a099bc356", 00:12:17.273 "assigned_rate_limits": { 00:12:17.273 "rw_ios_per_sec": 0, 00:12:17.273 "rw_mbytes_per_sec": 0, 00:12:17.273 "r_mbytes_per_sec": 0, 00:12:17.273 "w_mbytes_per_sec": 0 00:12:17.273 }, 00:12:17.273 "claimed": false, 00:12:17.273 "zoned": false, 00:12:17.273 "supported_io_types": { 00:12:17.273 "read": true, 00:12:17.273 "write": true, 00:12:17.273 "unmap": true, 00:12:17.273 "write_zeroes": true, 00:12:17.273 "flush": false, 00:12:17.273 "reset": true, 00:12:17.273 "compare": false, 00:12:17.273 "compare_and_write": false, 00:12:17.273 "abort": false, 00:12:17.273 "nvme_admin": false, 00:12:17.273 "nvme_io": false 00:12:17.273 }, 00:12:17.273 "driver_specific": { 00:12:17.273 "lvol": { 00:12:17.273 "lvol_store_uuid": "7d6e59bc-3e84-4946-8f8a-a4a9e8898654", 00:12:17.273 "base_bdev": "aio_bdev", 00:12:17.273 "thin_provision": false, 00:12:17.273 "num_allocated_clusters": 38, 00:12:17.273 "snapshot": false, 00:12:17.273 "clone": false, 00:12:17.273 "esnap_clone": false 00:12:17.273 } 00:12:17.273 } 00:12:17.273 } 00:12:17.273 ] 00:12:17.273 20:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:12:17.273 20:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d6e59bc-3e84-4946-8f8a-a4a9e8898654 00:12:17.273 20:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:17.531 20:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:17.531 20:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d6e59bc-3e84-4946-8f8a-a4a9e8898654 00:12:17.531 20:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:17.789 20:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:17.789 20:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 36552e59-8a70-48b8-bb3f-240a099bc356 00:12:18.046 20:11:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7d6e59bc-3e84-4946-8f8a-a4a9e8898654 00:12:18.303 20:11:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:18.561 20:11:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:18.561 00:12:18.561 real 0m19.163s 00:12:18.561 user 0m48.877s 00:12:18.561 sys 0m4.579s 00:12:18.561 20:11:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:18.561 20:11:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:18.561 ************************************ 00:12:18.561 END TEST lvs_grow_dirty 00:12:18.561 ************************************ 00:12:18.561 20:11:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:18.561 20:11:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:12:18.561 20:11:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:12:18.561 20:11:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:12:18.561 20:11:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:18.561 20:11:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:12:18.561 20:11:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:12:18.561 20:11:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:12:18.561 20:11:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:18.561 nvmf_trace.0 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:18.819 rmmod nvme_tcp 00:12:18.819 rmmod nvme_fabrics 00:12:18.819 rmmod nvme_keyring 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 181339 ']' 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 181339 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 181339 ']' 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 181339 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 181339 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow 
-- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 181339' 00:12:18.819 killing process with pid 181339 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 181339 00:12:18.819 20:11:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 181339 00:12:19.078 20:11:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:19.078 20:11:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:19.078 20:11:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:19.078 20:11:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:19.078 20:11:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:19.078 20:11:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.078 20:11:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.078 20:11:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.980 20:11:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:20.980 00:12:20.980 real 0m42.195s 00:12:20.980 user 1m11.750s 00:12:20.980 sys 0m8.263s 00:12:20.980 20:11:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:20.980 20:11:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:20.980 ************************************ 00:12:20.980 END TEST nvmf_lvs_grow 00:12:20.980 ************************************ 00:12:20.980 20:11:08 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:20.980 20:11:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:20.980 20:11:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:20.980 20:11:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:21.238 ************************************ 00:12:21.238 START TEST nvmf_bdev_io_wait 00:12:21.238 ************************************ 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:21.238 * Looking for test storage... 
00:12:21.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.238 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:21.239 20:11:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:23.142 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:23.142 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:23.142 Found net devices under 0000:09:00.0: cvl_0_0 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.142 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:23.143 Found net devices under 0000:09:00.1: cvl_0_1 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:23.143 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:23.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:23.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:12:23.401 00:12:23.401 --- 10.0.0.2 ping statistics --- 00:12:23.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.401 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:23.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:23.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:12:23.401 00:12:23.401 --- 10.0.0.1 ping statistics --- 00:12:23.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.401 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=183734 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 183734 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 183734 ']' 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:23.401 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.402 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:23.402 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:23.402 [2024-05-16 20:11:10.363083] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:12:23.402 [2024-05-16 20:11:10.363162] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.402 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.402 [2024-05-16 20:11:10.427552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.402 [2024-05-16 20:11:10.539764] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.402 [2024-05-16 20:11:10.539823] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.402 [2024-05-16 20:11:10.539850] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.402 [2024-05-16 20:11:10.539870] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.402 [2024-05-16 20:11:10.539880] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.402 [2024-05-16 20:11:10.539952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.402 [2024-05-16 20:11:10.540070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.402 [2024-05-16 20:11:10.540118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.402 [2024-05-16 20:11:10.540121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.660 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:23.660 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:12:23.660 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:23.660 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:23.660 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:23.660 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.660 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:23.660 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.660 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:23.661 [2024-05-16 20:11:10.691427] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.661 20:11:10 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:23.661 Malloc0 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:23.661 [2024-05-16 20:11:10.750247] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:23.661 [2024-05-16 20:11:10.750532] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=183886 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=183888 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:23.661 { 00:12:23.661 "params": { 00:12:23.661 "name": "Nvme$subsystem", 00:12:23.661 "trtype": "$TEST_TRANSPORT", 00:12:23.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:23.661 "adrfam": "ipv4", 00:12:23.661 "trsvcid": "$NVMF_PORT", 00:12:23.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:23.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:23.661 "hdgst": ${hdgst:-false}, 00:12:23.661 "ddgst": ${ddgst:-false} 00:12:23.661 }, 00:12:23.661 "method": 
"bdev_nvme_attach_controller" 00:12:23.661 } 00:12:23.661 EOF 00:12:23.661 )") 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=183890 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:23.661 { 00:12:23.661 "params": { 00:12:23.661 "name": "Nvme$subsystem", 00:12:23.661 "trtype": "$TEST_TRANSPORT", 00:12:23.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:23.661 "adrfam": "ipv4", 00:12:23.661 "trsvcid": "$NVMF_PORT", 00:12:23.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:23.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:23.661 "hdgst": ${hdgst:-false}, 00:12:23.661 "ddgst": ${ddgst:-false} 00:12:23.661 }, 00:12:23.661 "method": "bdev_nvme_attach_controller" 00:12:23.661 } 00:12:23.661 EOF 00:12:23.661 )") 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=183893 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:23.661 { 00:12:23.661 "params": { 00:12:23.661 "name": "Nvme$subsystem", 00:12:23.661 "trtype": "$TEST_TRANSPORT", 00:12:23.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:23.661 "adrfam": "ipv4", 00:12:23.661 "trsvcid": "$NVMF_PORT", 00:12:23.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:23.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:23.661 "hdgst": ${hdgst:-false}, 00:12:23.661 "ddgst": ${ddgst:-false} 00:12:23.661 }, 00:12:23.661 "method": "bdev_nvme_attach_controller" 00:12:23.661 } 00:12:23.661 EOF 00:12:23.661 )") 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 
-- # local subsystem config 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:23.661 { 00:12:23.661 "params": { 00:12:23.661 "name": "Nvme$subsystem", 00:12:23.661 "trtype": "$TEST_TRANSPORT", 00:12:23.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:23.661 "adrfam": "ipv4", 00:12:23.661 "trsvcid": "$NVMF_PORT", 00:12:23.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:23.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:23.661 "hdgst": ${hdgst:-false}, 00:12:23.661 "ddgst": ${ddgst:-false} 00:12:23.661 }, 00:12:23.661 "method": "bdev_nvme_attach_controller" 00:12:23.661 } 00:12:23.661 EOF 00:12:23.661 )") 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 183886 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:23.661 "params": { 00:12:23.661 "name": "Nvme1", 00:12:23.661 "trtype": "tcp", 00:12:23.661 "traddr": "10.0.0.2", 00:12:23.661 "adrfam": "ipv4", 00:12:23.661 "trsvcid": "4420", 00:12:23.661 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:23.661 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:23.661 "hdgst": false, 00:12:23.661 "ddgst": false 00:12:23.661 }, 00:12:23.661 "method": "bdev_nvme_attach_controller" 00:12:23.661 }' 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
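The printf/jq sequence above is what assembles the JSON each bdevperf instance reads over --json /dev/fd/63. A minimal hand-written equivalent is sketched below; only the inner bdev_nvme_attach_controller entry appears in the trace, so the outer "subsystems"/"bdev" wrapper is an assumption about the usual SPDK JSON-config layout, and the in-tree binary path and the write-job flags are taken from this run.

gen_config() {
cat <<'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [{
  "method": "bdev_nvme_attach_controller",
  "params": {"name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
             "adrfam": "ipv4", "trsvcid": "4420",
             "subnqn": "nqn.2016-06.io.spdk:cnode1",
             "hostnqn": "nqn.2016-06.io.spdk:host1",
             "hdgst": false, "ddgst": false}}]}]}
JSON
}
# Process substitution plays the role of the /dev/fd/63 redirection seen in the trace.
./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_config) -q 128 -o 4096 -w write -t 1 -s 256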
00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:23.661 "params": { 00:12:23.661 "name": "Nvme1", 00:12:23.661 "trtype": "tcp", 00:12:23.661 "traddr": "10.0.0.2", 00:12:23.661 "adrfam": "ipv4", 00:12:23.661 "trsvcid": "4420", 00:12:23.661 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:23.661 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:23.661 "hdgst": false, 00:12:23.661 "ddgst": false 00:12:23.661 }, 00:12:23.661 "method": "bdev_nvme_attach_controller" 00:12:23.661 }' 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:23.661 "params": { 00:12:23.661 "name": "Nvme1", 00:12:23.661 "trtype": "tcp", 00:12:23.661 "traddr": "10.0.0.2", 00:12:23.661 "adrfam": "ipv4", 00:12:23.661 "trsvcid": "4420", 00:12:23.661 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:23.661 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:23.661 "hdgst": false, 00:12:23.661 "ddgst": false 00:12:23.661 }, 00:12:23.661 "method": "bdev_nvme_attach_controller" 00:12:23.661 }' 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:23.661 20:11:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:23.661 "params": { 00:12:23.662 "name": "Nvme1", 00:12:23.662 "trtype": "tcp", 00:12:23.662 "traddr": "10.0.0.2", 00:12:23.662 "adrfam": "ipv4", 00:12:23.662 "trsvcid": "4420", 00:12:23.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:23.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:23.662 "hdgst": false, 00:12:23.662 "ddgst": false 00:12:23.662 }, 00:12:23.662 "method": "bdev_nvme_attach_controller" 00:12:23.662 }' 00:12:23.662 [2024-05-16 20:11:10.796199] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:12:23.662 [2024-05-16 20:11:10.796199] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:12:23.662 [2024-05-16 20:11:10.796283] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:23.662 [2024-05-16 20:11:10.796283] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:23.662 [2024-05-16 20:11:10.796626] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:12:23.662 [2024-05-16 20:11:10.796627] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
00:12:23.662 [2024-05-16 20:11:10.796708] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:23.662 [2024-05-16 20:11:10.796709] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:23.919 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.919 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.919 [2024-05-16 20:11:10.954503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.919 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.919 [2024-05-16 20:11:11.027979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.919 [2024-05-16 20:11:11.049583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:24.177 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.177 [2024-05-16 20:11:11.122741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:12:24.177 [2024-05-16 20:11:11.128806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.177 [2024-05-16 20:11:11.229861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:24.177 [2024-05-16 20:11:11.233932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.434 [2024-05-16 20:11:11.336098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:24.434 Running I/O for 1 seconds... 00:12:24.434 Running I/O for 1 seconds... 00:12:24.434 Running I/O for 1 seconds... 00:12:24.692 Running I/O for 1 seconds...
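The four "Running I/O" lines come from four bdevperf processes driven in parallel against the same subsystem, one workload per process on its own core mask and shm id; the trace stores their PIDs in WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID and waits on them. A condensed sketch of that orchestration, reusing the gen_config helper sketched earlier (not the literal bdev_io_wait.sh code):

BP=./build/examples/bdevperf
$BP -m 0x10 -i 1 --json <(gen_config) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$BP -m 0x20 -i 2 --json <(gen_config) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
$BP -m 0x40 -i 3 --json <(gen_config) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$BP -m 0x80 -i 4 --json <(gen_config) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID   # each instance prints its own Latency(us) table on exit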
00:12:25.625 00:12:25.625 Latency(us) 00:12:25.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:25.625 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:25.625 Nvme1n1 : 1.01 12391.25 48.40 0.00 0.00 10295.14 5679.79 19709.35 00:12:25.625 =================================================================================================================== 00:12:25.626 Total : 12391.25 48.40 0.00 0.00 10295.14 5679.79 19709.35 00:12:25.626 00:12:25.626 Latency(us) 00:12:25.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:25.626 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:25.626 Nvme1n1 : 1.00 199555.75 779.51 0.00 0.00 638.64 267.00 794.93 00:12:25.626 =================================================================================================================== 00:12:25.626 Total : 199555.75 779.51 0.00 0.00 638.64 267.00 794.93 00:12:25.626 00:12:25.626 Latency(us) 00:12:25.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:25.626 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:25.626 Nvme1n1 : 1.01 7128.40 27.85 0.00 0.00 17853.66 2014.63 20583.16 00:12:25.626 =================================================================================================================== 00:12:25.626 Total : 7128.40 27.85 0.00 0.00 17853.66 2014.63 20583.16 00:12:25.626 00:12:25.626 Latency(us) 00:12:25.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:25.626 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:25.626 Nvme1n1 : 1.01 7323.63 28.61 0.00 0.00 17389.49 8883.77 30680.56 00:12:25.626 =================================================================================================================== 00:12:25.626 Total : 7323.63 28.61 0.00 0.00 17389.49 8883.77 30680.56 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 183888 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 183890 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 183893 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:25.883 rmmod nvme_tcp 00:12:25.883 rmmod nvme_fabrics 00:12:25.883 rmmod nvme_keyring 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 183734 ']' 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 183734 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 183734 ']' 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 183734 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 183734 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 183734' 00:12:25.883 killing process with pid 183734 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 183734 00:12:25.883 [2024-05-16 20:11:12.998905] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:25.883 20:11:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 183734 00:12:26.142 20:11:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:26.142 20:11:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:26.142 20:11:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:26.142 20:11:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:26.142 20:11:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:26.142 20:11:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.142 20:11:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.142 20:11:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.670 20:11:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:28.670 00:12:28.670 real 0m7.144s 00:12:28.670 user 0m16.218s 00:12:28.670 sys 0m3.688s 00:12:28.670 20:11:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:28.670 20:11:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:28.670 ************************************ 00:12:28.670 END TEST nvmf_bdev_io_wait 00:12:28.670 ************************************ 00:12:28.670 20:11:15 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:28.670 20:11:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:28.670 20:11:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:28.670 20:11:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
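Between the two tests the trap handler tears the setup down. Condensed, the sequence traced above roughly amounts to the following; the namespace and interface names are the ones used in this run, and the real helpers (nvmftestfini, remove_spdk_ns) in nvmf/common.sh are more involved:

kill "$nvmfpid" && wait "$nvmfpid" || true   # stop the nvmf_tgt started earlier (pid 183734 in this run)
modprobe -v -r nvme-tcp                      # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above come from these removals
modprobe -v -r nvme-fabrics
ip netns delete cvl_0_0_ns_spdk              # deleting the namespace returns cvl_0_0 to the default netns
ip -4 addr flush cvl_0_1                     # drop the 10.0.0.1/24 test address from the initiator side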
00:12:28.670 ************************************ 00:12:28.670 START TEST nvmf_queue_depth 00:12:28.670 ************************************ 00:12:28.670 20:11:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:28.670 * Looking for test storage... 00:12:28.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.670 20:11:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.670 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:28.670 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.670 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.670 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:28.671 20:11:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:30.568 
20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:30.568 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:30.568 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:30.568 Found net devices under 0000:09:00.0: cvl_0_0 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:30.568 Found net devices under 0000:09:00.1: cvl_0_1 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:30.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:30.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:12:30.568 00:12:30.568 --- 10.0.0.2 ping statistics --- 00:12:30.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.568 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:30.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:30.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:12:30.568 00:12:30.568 --- 10.0.0.1 ping statistics --- 00:12:30.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.568 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=186109 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 186109 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 186109 ']' 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.568 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:30.569 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.569 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:30.569 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.569 [2024-05-16 20:11:17.574355] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
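Here nvmfappstart launches the target inside the namespace and blocks until its RPC socket answers. A rough equivalent of that start-and-wait step is sketched below; the polling loop and rpc.py call are illustrative stand-ins for the waitforlisten helper in autotest_common.sh:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                                # keep polling until the UNIX-domain RPC socket is up
done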
00:12:30.569 [2024-05-16 20:11:17.574449] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.569 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.569 [2024-05-16 20:11:17.639074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.826 [2024-05-16 20:11:17.751302] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.826 [2024-05-16 20:11:17.751356] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.826 [2024-05-16 20:11:17.751399] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.827 [2024-05-16 20:11:17.751416] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.827 [2024-05-16 20:11:17.751445] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.827 [2024-05-16 20:11:17.751480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.827 [2024-05-16 20:11:17.902014] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.827 Malloc0 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.827 20:11:17 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.827 [2024-05-16 20:11:17.967427] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:30.827 [2024-05-16 20:11:17.967750] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.827 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.085 20:11:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=186133 00:12:31.085 20:11:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:31.085 20:11:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 186133 /var/tmp/bdevperf.sock 00:12:31.085 20:11:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:31.085 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 186133 ']' 00:12:31.085 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:31.085 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:31.085 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:31.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:31.085 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:31.085 20:11:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:31.085 [2024-05-16 20:11:18.013216] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:12:31.085 [2024-05-16 20:11:18.013285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid186133 ] 00:12:31.085 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.085 [2024-05-16 20:11:18.074595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.085 [2024-05-16 20:11:18.190667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.343 20:11:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:31.343 20:11:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:12:31.343 20:11:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:31.343 20:11:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.343 20:11:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:31.343 NVMe0n1 00:12:31.343 20:11:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.343 20:11:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:31.601 Running I/O for 10 seconds... 00:12:41.565 00:12:41.565 Latency(us) 00:12:41.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.566 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:41.566 Verification LBA range: start 0x0 length 0x4000 00:12:41.566 NVMe0n1 : 10.06 8442.09 32.98 0.00 0.00 120784.34 18155.90 75730.49 00:12:41.566 =================================================================================================================== 00:12:41.566 Total : 8442.09 32.98 0.00 0.00 120784.34 18155.90 75730.49 00:12:41.566 0 00:12:41.566 20:11:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 186133 00:12:41.566 20:11:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 186133 ']' 00:12:41.566 20:11:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 186133 00:12:41.566 20:11:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:12:41.566 20:11:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:41.566 20:11:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 186133 00:12:41.566 20:11:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:41.566 20:11:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:41.566 20:11:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 186133' 00:12:41.566 killing process with pid 186133 00:12:41.566 20:11:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 186133 00:12:41.566 Received shutdown signal, test time was about 10.000000 seconds 00:12:41.566 00:12:41.566 Latency(us) 00:12:41.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.566 =================================================================================================================== 00:12:41.566 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:12:41.566 20:11:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 186133 00:12:41.824 20:11:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:41.824 20:11:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:41.824 20:11:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:41.824 20:11:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:41.824 20:11:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:41.824 20:11:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:41.824 20:11:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:41.824 20:11:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:41.824 rmmod nvme_tcp 00:12:41.824 rmmod nvme_fabrics 00:12:42.081 rmmod nvme_keyring 00:12:42.081 20:11:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:42.081 20:11:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:42.081 20:11:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:42.081 20:11:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 186109 ']' 00:12:42.081 20:11:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 186109 00:12:42.081 20:11:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 186109 ']' 00:12:42.081 20:11:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 186109 00:12:42.081 20:11:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:12:42.081 20:11:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:42.081 20:11:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 186109 00:12:42.082 20:11:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:42.082 20:11:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:42.082 20:11:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 186109' 00:12:42.082 killing process with pid 186109 00:12:42.082 20:11:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 186109 00:12:42.082 [2024-05-16 20:11:29.032032] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:42.082 20:11:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 186109 00:12:42.340 20:11:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:42.340 20:11:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:42.340 20:11:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:42.340 20:11:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:42.340 20:11:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:42.340 20:11:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.340 20:11:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.340 20:11:29 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.239 20:11:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:44.239 00:12:44.239 real 0m16.017s 00:12:44.239 user 0m22.666s 00:12:44.239 sys 0m2.870s 00:12:44.239 20:11:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:44.239 20:11:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:44.239 ************************************ 00:12:44.239 END TEST nvmf_queue_depth 00:12:44.239 ************************************ 00:12:44.497 20:11:31 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:44.497 20:11:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:44.497 20:11:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:44.497 20:11:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:44.497 ************************************ 00:12:44.497 START TEST nvmf_target_multipath 00:12:44.497 ************************************ 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:44.497 * Looking for test storage... 00:12:44.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.497 20:11:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:12:44.498 20:11:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:12:46.447 20:11:33 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:46.447 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:46.447 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.447 20:11:33 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:46.447 Found net devices under 0000:09:00.0: cvl_0_0 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:46.447 Found net devices under 0000:09:00.1: cvl_0_1 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:46.447 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:46.448 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:46.448 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:46.448 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:46.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:12:46.448 00:12:46.448 --- 10.0.0.2 ping statistics --- 00:12:46.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.448 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:12:46.448 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:46.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:46.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:12:46.448 00:12:46.448 --- 10.0.0.1 ping statistics --- 00:12:46.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.448 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:46.448 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.448 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:12:46.448 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:46.448 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.448 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:46.448 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:46.448 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.448 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:46.448 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:46.705 only one NIC for nvmf test 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:46.705 rmmod nvme_tcp 00:12:46.705 rmmod nvme_fabrics 00:12:46.705 rmmod nvme_keyring 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.705 20:11:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:48.605 00:12:48.605 real 0m4.275s 00:12:48.605 user 0m0.826s 00:12:48.605 sys 0m1.455s 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:48.605 20:11:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:48.605 ************************************ 00:12:48.605 END TEST nvmf_target_multipath 00:12:48.605 ************************************ 00:12:48.605 20:11:35 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:48.605 20:11:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:48.605 20:11:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:48.605 20:11:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:48.862 ************************************ 00:12:48.862 START TEST nvmf_zcopy 00:12:48.862 ************************************ 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:48.862 * Looking for test storage... 
00:12:48.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:12:48.862 20:11:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:50.761 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:50.761 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:12:50.761 20:11:37 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:12:50.761 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:50.761 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:50.761 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:50.761 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:50.761 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:50.762 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:50.762 
20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:50.762 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:50.762 Found net devices under 0000:09:00.0: cvl_0_0 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:50.762 Found net devices under 0000:09:00.1: cvl_0_1 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:50.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:12:50.762 00:12:50.762 --- 10.0.0.2 ping statistics --- 00:12:50.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.762 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:50.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:50.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:12:50.762 00:12:50.762 --- 10.0.0.1 ping statistics --- 00:12:50.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.762 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:50.762 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:51.021 20:11:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:51.021 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:51.021 20:11:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:51.021 20:11:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.021 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=191305 00:12:51.021 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:51.021 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 191305 00:12:51.021 20:11:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 191305 ']' 00:12:51.021 20:11:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.021 20:11:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:51.021 20:11:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.021 20:11:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:51.021 20:11:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.021 [2024-05-16 20:11:37.969130] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:12:51.021 [2024-05-16 20:11:37.969213] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.021 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.021 [2024-05-16 20:11:38.036874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.021 [2024-05-16 20:11:38.155031] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.021 [2024-05-16 20:11:38.155086] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:51.021 [2024-05-16 20:11:38.155108] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.021 [2024-05-16 20:11:38.155133] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.021 [2024-05-16 20:11:38.155149] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.021 [2024-05-16 20:11:38.155200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.279 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:51.279 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:12:51.279 20:11:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:51.279 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:51.279 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.279 20:11:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.279 20:11:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:51.279 20:11:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:51.279 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.279 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.279 [2024-05-16 20:11:38.302385] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.279 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.279 20:11:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:51.279 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.279 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.279 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.279 20:11:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.280 [2024-05-16 20:11:38.318354] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:51.280 [2024-05-16 20:11:38.318633] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.280 malloc0 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:51.280 { 00:12:51.280 "params": { 00:12:51.280 "name": "Nvme$subsystem", 00:12:51.280 "trtype": "$TEST_TRANSPORT", 00:12:51.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:51.280 "adrfam": "ipv4", 00:12:51.280 "trsvcid": "$NVMF_PORT", 00:12:51.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:51.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:51.280 "hdgst": ${hdgst:-false}, 00:12:51.280 "ddgst": ${ddgst:-false} 00:12:51.280 }, 00:12:51.280 "method": "bdev_nvme_attach_controller" 00:12:51.280 } 00:12:51.280 EOF 00:12:51.280 )") 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:51.280 20:11:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:51.280 "params": { 00:12:51.280 "name": "Nvme1", 00:12:51.280 "trtype": "tcp", 00:12:51.280 "traddr": "10.0.0.2", 00:12:51.280 "adrfam": "ipv4", 00:12:51.280 "trsvcid": "4420", 00:12:51.280 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.280 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.280 "hdgst": false, 00:12:51.280 "ddgst": false 00:12:51.280 }, 00:12:51.280 "method": "bdev_nvme_attach_controller" 00:12:51.280 }' 00:12:51.280 [2024-05-16 20:11:38.398542] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:12:51.280 [2024-05-16 20:11:38.398632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid191325 ] 00:12:51.538 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.538 [2024-05-16 20:11:38.461639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.538 [2024-05-16 20:11:38.581805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.796 Running I/O for 10 seconds... 
00:13:01.763 00:13:01.763 Latency(us) 00:13:01.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.763 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:01.763 Verification LBA range: start 0x0 length 0x1000 00:13:01.763 Nvme1n1 : 10.01 5730.69 44.77 0.00 0.00 22272.66 4004.98 33593.27 00:13:01.763 =================================================================================================================== 00:13:01.763 Total : 5730.69 44.77 0.00 0.00 22272.66 4004.98 33593.27 00:13:02.021 20:11:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=192628 00:13:02.021 20:11:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:02.021 20:11:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:02.021 20:11:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:02.021 20:11:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:02.281 20:11:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:02.281 20:11:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:02.281 20:11:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:02.281 20:11:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:02.281 { 00:13:02.281 "params": { 00:13:02.281 "name": "Nvme$subsystem", 00:13:02.281 "trtype": "$TEST_TRANSPORT", 00:13:02.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:02.281 "adrfam": "ipv4", 00:13:02.281 "trsvcid": "$NVMF_PORT", 00:13:02.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:02.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:02.281 "hdgst": ${hdgst:-false}, 00:13:02.281 "ddgst": ${ddgst:-false} 00:13:02.281 }, 00:13:02.281 "method": "bdev_nvme_attach_controller" 00:13:02.281 } 00:13:02.281 EOF 00:13:02.281 )") 00:13:02.281 20:11:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:02.281 20:11:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
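For reference, the zcopy setup traced above reduces to a short RPC sequence plus one bdevperf run. The sketch below is only illustrative and is assembled from the commands visible in the trace: rpc_cmd is the harness wrapper around scripts/rpc.py (socket selection is handled by the wrapper), the /tmp/nvme_attach.json path stands in for the /dev/fd/63 process substitution used by the script, and the outer "subsystems"/"bdev" wrapper follows the standard SPDK JSON config layout, assumed here to match what gen_nvmf_target_json emits around the fragment shown above.
# Target side (arguments copied from the trace above): TCP transport created with
# --zcopy, one subsystem, and a 32 MiB malloc bdev with 4096-byte blocks as NSID 1.
scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Initiator side: bdevperf attaches to the target via the JSON config and drives the
# workload seen above (-q 128 queue depth, -o 8192-byte I/Os, -w randrw -M 50 for a
# 50/50 random read/write mix, -t 5 seconds).
cat > /tmp/nvme_attach.json <<'JSON'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
JSON
./build/examples/bdevperf --json /tmp/nvme_attach.json -q 128 -o 8192 -w randrw -M 50 -t 5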
00:13:02.281 [2024-05-16 20:11:49.172388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.281 [2024-05-16 20:11:49.172431] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.281 20:11:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:02.281 20:11:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:02.281 "params": { 00:13:02.281 "name": "Nvme1", 00:13:02.281 "trtype": "tcp", 00:13:02.281 "traddr": "10.0.0.2", 00:13:02.281 "adrfam": "ipv4", 00:13:02.281 "trsvcid": "4420", 00:13:02.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.281 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:02.281 "hdgst": false, 00:13:02.281 "ddgst": false 00:13:02.281 }, 00:13:02.281 "method": "bdev_nvme_attach_controller" 00:13:02.281 }' 00:13:02.281 [2024-05-16 20:11:49.180348] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.281 [2024-05-16 20:11:49.180379] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.281 [2024-05-16 20:11:49.188357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.281 [2024-05-16 20:11:49.188383] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.281 [2024-05-16 20:11:49.196372] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.281 [2024-05-16 20:11:49.196395] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.281 [2024-05-16 20:11:49.204390] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.281 [2024-05-16 20:11:49.204411] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.281 [2024-05-16 20:11:49.210664] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:13:02.281 [2024-05-16 20:11:49.210755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid192628 ]
00:13:02.281 [2024-05-16 20:11:49.212438] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:02.281 [2024-05-16 20:11:49.212469] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two *ERROR* lines repeat, with new timestamps, through 20:11:49.236 ...]
00:13:02.281 EAL: No free 2048 kB hugepages reported on node 1
[... the same two *ERROR* lines repeat, with new timestamps, from 20:11:49.244 through 20:11:49.276 ...]
00:13:02.281 [2024-05-16 20:11:49.279414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[... the same two *ERROR* lines repeat, with new timestamps, from 20:11:49.284 through 20:11:49.396 ...]
00:13:02.282 [2024-05-16 20:11:49.400233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[... the same two *ERROR* lines repeat, with new timestamps, from 20:11:49.404 through 20:11:49.661; the elapsed-time prefix advances from 00:13:02.282 to 00:13:02.541 ...]
00:13:02.541 Running I/O for 5 seconds...
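The repeated *ERROR* pairs before and after this point come from the target: while bdevperf drives I/O, the target keeps receiving requests to add a namespace whose NSID (1) is already taken, so spdk_nvmf_subsystem_add_ns_ext rejects each attempt and nvmf_rpc_ns_paused reports the failure. The loop below is only a rough illustration of the kind of driver that could generate this pattern, not the actual zcopy.sh code; it assumes rpc.py's nvmf_subsystem_add_ns takes the subsystem NQN and a bdev name as positional arguments with the NSID given via -n, and Malloc0 is a placeholder bdev name.

# Illustrative sketch only (not the real test script): keep re-adding NSID 1 to a
# subsystem that already has it while the perf job runs; each attempt is expected
# to fail and log the "Requested NSID 1 already in use" / "Unable to add namespace" pair.
rpc=./scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
while kill -0 "$perfpid" 2>/dev/null; do                        # $perfpid as captured in the trace above
	"$rpc" nvmf_subsystem_add_ns -n 1 "$nqn" Malloc0 || true    # rejected while NSID 1 exists
	sleep 0.01
done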
00:13:02.541 [2024-05-16 20:11:49.669754] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:02.541 [2024-05-16 20:11:49.669780] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two *ERROR* lines repeat, roughly every 11 ms, for the rest of this excerpt: wall-clock timestamps 20:11:49.682 through 20:11:52.153, elapsed-time prefixes 00:13:02.541 through 00:13:05.127; the excerpt is cut off mid-entry at 20:11:52.153498 ...]
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.127 [2024-05-16 20:11:52.165081] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.127 [2024-05-16 20:11:52.165109] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.127 [2024-05-16 20:11:52.176461] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.127 [2024-05-16 20:11:52.176491] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.127 [2024-05-16 20:11:52.187710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.127 [2024-05-16 20:11:52.187741] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.127 [2024-05-16 20:11:52.199027] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.127 [2024-05-16 20:11:52.199055] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.127 [2024-05-16 20:11:52.212458] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.127 [2024-05-16 20:11:52.212497] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.127 [2024-05-16 20:11:52.223792] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.127 [2024-05-16 20:11:52.223823] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.127 [2024-05-16 20:11:52.235292] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.127 [2024-05-16 20:11:52.235323] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.127 [2024-05-16 20:11:52.246978] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.127 [2024-05-16 20:11:52.247005] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.127 [2024-05-16 20:11:52.258644] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.127 [2024-05-16 20:11:52.258675] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.127 [2024-05-16 20:11:52.270265] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.127 [2024-05-16 20:11:52.270296] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.281893] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.281937] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.293289] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.293320] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.304624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.304656] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.315795] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.315826] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.327179] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.327210] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.338476] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.338507] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.349716] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.349748] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.361082] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.361110] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.373272] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.373305] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.386504] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.386535] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.397259] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.397291] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.408819] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.408849] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.420553] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.420584] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.433757] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.433798] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.444615] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.444648] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.455588] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.455620] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.466966] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.466995] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.478187] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.478232] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.489680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.489712] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.501085] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.501114] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.512382] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.512420] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.386 [2024-05-16 20:11:52.523814] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.386 [2024-05-16 20:11:52.523845] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.535049] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.535077] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.546832] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.546872] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.558090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.558118] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.568924] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.568953] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.580058] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.580087] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.591505] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.591535] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.603024] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.603052] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.614402] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.614433] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.625518] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.625549] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.637035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.637063] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.648801] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.648832] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.660679] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.660710] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.672387] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.672418] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.683525] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.683556] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.694693] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.694724] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.705637] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.705668] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.716716] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.716747] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.728198] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.728229] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.739921] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.739949] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.751360] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.751391] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.762708] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.762738] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.774084] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.774112] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.645 [2024-05-16 20:11:52.785406] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.645 [2024-05-16 20:11:52.785437] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:52.797149] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:52.797176] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:52.808595] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:52.808626] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:52.819582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:52.819609] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:52.831170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:52.831198] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:52.842565] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:52.842597] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:52.856049] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:52.856077] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:52.866095] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:52.866123] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:52.878009] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:52.878037] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:52.889417] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:52.889446] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:52.902423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:52.902450] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:52.912418] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:52.912447] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:52.922874] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:52.922902] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:52.933118] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:52.933146] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:52.943629] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:52.943656] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:52.954460] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:52.954487] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:52.967000] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:52.967028] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:52.977478] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:52.977505] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:52.988045] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:52.988073] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:52.998989] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:52.999018] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:53.011177] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:53.011204] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:53.021441] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:53.021468] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:53.031767] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:53.031795] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.905 [2024-05-16 20:11:53.041937] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.905 [2024-05-16 20:11:53.041965] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.164 [2024-05-16 20:11:53.052611] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.164 [2024-05-16 20:11:53.052638] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.164 [2024-05-16 20:11:53.063132] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.164 [2024-05-16 20:11:53.063174] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.164 [2024-05-16 20:11:53.073809] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.164 [2024-05-16 20:11:53.073850] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.164 [2024-05-16 20:11:53.084286] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.164 [2024-05-16 20:11:53.084314] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.164 [2024-05-16 20:11:53.094607] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.164 [2024-05-16 20:11:53.094634] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.164 [2024-05-16 20:11:53.104710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.164 [2024-05-16 20:11:53.104737] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.164 [2024-05-16 20:11:53.115472] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.164 [2024-05-16 20:11:53.115498] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.164 [2024-05-16 20:11:53.127758] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.164 [2024-05-16 20:11:53.127785] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.164 [2024-05-16 20:11:53.137620] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.164 [2024-05-16 20:11:53.137647] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.164 [2024-05-16 20:11:53.148226] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.164 [2024-05-16 20:11:53.148253] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.164 [2024-05-16 20:11:53.158723] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.164 [2024-05-16 20:11:53.158750] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.164 [2024-05-16 20:11:53.169728] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.164 [2024-05-16 20:11:53.169755] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.164 [2024-05-16 20:11:53.180360] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.164 [2024-05-16 20:11:53.180387] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.164 [2024-05-16 20:11:53.192618] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.164 [2024-05-16 20:11:53.192645] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.164 [2024-05-16 20:11:53.202536] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.164 [2024-05-16 20:11:53.202563] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.164 [2024-05-16 20:11:53.213462] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.164 [2024-05-16 20:11:53.213490] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.165 [2024-05-16 20:11:53.223789] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.165 [2024-05-16 20:11:53.223816] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.165 [2024-05-16 20:11:53.234702] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.165 [2024-05-16 20:11:53.234729] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.165 [2024-05-16 20:11:53.248287] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.165 [2024-05-16 20:11:53.248314] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.165 [2024-05-16 20:11:53.258888] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.165 [2024-05-16 20:11:53.258916] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.165 [2024-05-16 20:11:53.269417] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.165 [2024-05-16 20:11:53.269444] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.165 [2024-05-16 20:11:53.280414] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.165 [2024-05-16 20:11:53.280441] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.165 [2024-05-16 20:11:53.291253] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.165 [2024-05-16 20:11:53.291281] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.165 [2024-05-16 20:11:53.303895] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.165 [2024-05-16 20:11:53.303922] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.313661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.313688] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.324404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.324431] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.335388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.335420] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.346776] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.346807] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.358239] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.358270] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.369511] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.369542] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.380748] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.380778] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.392000] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.392027] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.403299] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.403330] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.414720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.414751] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.425862] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.425914] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.439422] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.439454] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.450368] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.450400] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.461799] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.461831] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.474998] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.475026] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.485795] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.485834] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.497097] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.497144] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.508243] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.508275] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.518827] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.518866] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.530104] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.530131] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.541391] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.541421] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.552728] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.552759] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.423 [2024-05-16 20:11:53.563925] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.423 [2024-05-16 20:11:53.563953] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.575542] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.575572] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.587333] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.587365] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.598476] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.598508] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.609795] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.609827] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.622773] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.622806] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.633483] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.633514] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.645119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.645147] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.656830] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.656873] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.668271] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.668303] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.680072] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.680101] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.691679] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.691710] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.703136] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.703187] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.714319] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.714350] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.725978] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.726014] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.737335] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.737366] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.748708] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.748738] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.759556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.759586] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.770545] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.770576] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.781828] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.781868] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.792976] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.793004] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.804004] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.804033] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.681 [2024-05-16 20:11:53.815113] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.681 [2024-05-16 20:11:53.815140] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.938 [2024-05-16 20:11:53.826484] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.938 [2024-05-16 20:11:53.826515] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.938 [2024-05-16 20:11:53.837909] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.938 [2024-05-16 20:11:53.837937] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:53.849188] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:53.849219] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:53.860512] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:53.860543] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:53.871783] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:53.871814] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:53.883099] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:53.883127] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:53.894373] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:53.894403] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:53.907903] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:53.907931] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:53.918423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:53.918462] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:53.929367] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:53.929400] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:53.940773] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:53.940804] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:53.954045] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:53.954073] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:53.964602] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:53.964633] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:53.976449] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:53.976480] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:53.987884] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:53.987928] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:53.999321] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:53.999349] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:54.010878] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:54.010921] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:54.022299] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:54.022331] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:54.033541] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:54.033573] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:54.044385] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:54.044416] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:54.055693] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:54.055723] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:54.066680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:54.066711] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.939 [2024-05-16 20:11:54.078253] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.939 [2024-05-16 20:11:54.078284] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.197 [2024-05-16 20:11:54.089378] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.197 [2024-05-16 20:11:54.089409] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.197 [2024-05-16 20:11:54.102346] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.197 [2024-05-16 20:11:54.102377] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.197 [2024-05-16 20:11:54.112987] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.197 [2024-05-16 20:11:54.113015] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.197 [2024-05-16 20:11:54.124796] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.197 [2024-05-16 20:11:54.124827] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.197 [2024-05-16 20:11:54.136484] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.197 [2024-05-16 20:11:54.136523] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.197 [2024-05-16 20:11:54.158370] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.197 [2024-05-16 20:11:54.158403] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.197 [2024-05-16 20:11:54.169116] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.197 [2024-05-16 20:11:54.169158] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.197 [2024-05-16 20:11:54.180409] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.197 [2024-05-16 20:11:54.180441] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.197 [2024-05-16 20:11:54.191801] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.197 [2024-05-16 20:11:54.191832] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.197 [2024-05-16 20:11:54.203036] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.197 [2024-05-16 20:11:54.203064] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.197 [2024-05-16 20:11:54.214616] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.197 [2024-05-16 20:11:54.214647] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.197 [2024-05-16 20:11:54.225869] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.197 [2024-05-16 20:11:54.225917] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.197 [2024-05-16 20:11:54.237531] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.197 [2024-05-16 20:11:54.237563] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.198 [2024-05-16 20:11:54.248515] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.198 [2024-05-16 20:11:54.248546] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.198 [2024-05-16 20:11:54.259768] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.198 [2024-05-16 20:11:54.259810] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.198 [2024-05-16 20:11:54.272900] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.198 [2024-05-16 20:11:54.272928] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.198 [2024-05-16 20:11:54.282666] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.198 [2024-05-16 20:11:54.282698] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.198 [2024-05-16 20:11:54.295257] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.198 [2024-05-16 20:11:54.295288] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.198 [2024-05-16 20:11:54.306195] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.198 [2024-05-16 20:11:54.306226] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.198 [2024-05-16 20:11:54.317515] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.198 [2024-05-16 20:11:54.317545] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.198 [2024-05-16 20:11:54.328868] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.198 [2024-05-16 20:11:54.328913] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.198 [2024-05-16 20:11:54.340190] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.198 [2024-05-16 20:11:54.340235] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.455 [2024-05-16 20:11:54.351558] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.455 [2024-05-16 20:11:54.351590] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.455 [2024-05-16 20:11:54.363006] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.455 [2024-05-16 20:11:54.363034] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.455 [2024-05-16 20:11:54.374627] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.455 [2024-05-16 20:11:54.374658] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.455 [2024-05-16 20:11:54.386044] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.455 [2024-05-16 20:11:54.386072] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.455 [2024-05-16 20:11:54.397390] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.456 [2024-05-16 20:11:54.397421] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.456 [2024-05-16 20:11:54.409045] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.456 [2024-05-16 20:11:54.409074] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.456 [2024-05-16 20:11:54.420328] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.456 [2024-05-16 20:11:54.420359] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.456 [2024-05-16 20:11:54.431441] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.456 [2024-05-16 20:11:54.431472] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.456 [2024-05-16 20:11:54.443141] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.456 [2024-05-16 20:11:54.443172] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.456 [2024-05-16 20:11:54.454451] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.456 [2024-05-16 20:11:54.454482] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.456 [2024-05-16 20:11:54.465932] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.456 [2024-05-16 20:11:54.465960] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.456 [2024-05-16 20:11:54.477584] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.456 [2024-05-16 20:11:54.477614] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.456 [2024-05-16 20:11:54.489115] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.456 [2024-05-16 20:11:54.489161] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.456 [2024-05-16 20:11:54.500620] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.456 [2024-05-16 20:11:54.500651] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.456 [2024-05-16 20:11:54.512244] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.456 [2024-05-16 20:11:54.512275] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.456 [2024-05-16 20:11:54.523675] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.456 [2024-05-16 20:11:54.523706] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.456 [2024-05-16 20:11:54.534933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.456 [2024-05-16 20:11:54.534961] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.456 [2024-05-16 20:11:54.546582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.456 [2024-05-16 20:11:54.546614] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.456 [2024-05-16 20:11:54.558216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.456 [2024-05-16 20:11:54.558247] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.456 [2024-05-16 20:11:54.569749] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.456 [2024-05-16 20:11:54.569780] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.456 [2024-05-16 20:11:54.581185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.456 [2024-05-16 20:11:54.581228] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.456 [2024-05-16 20:11:54.592466] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.456 [2024-05-16 20:11:54.592498] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.714 [2024-05-16 20:11:54.603327] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.714 [2024-05-16 20:11:54.603370] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.714 [2024-05-16 20:11:54.614446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.714 [2024-05-16 20:11:54.614477] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.714 [2024-05-16 20:11:54.625792] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.714 [2024-05-16 20:11:54.625823] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.714 [2024-05-16 20:11:54.637284] 
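The pattern above is the expected failure path when a namespace is added with an NSID the subsystem already owns: spdk_nvmf_subsystem_add_ns_ext() rejects the duplicate and the nvmf_subsystem_add_ns RPC handler logs "Unable to add namespace". A minimal sketch of how such a duplicate add can be driven against a running target with SPDK's scripts/rpc.py follows; the subsystem NQN, bdev name and malloc sizing are illustrative assumptions, not values taken from this run:
  # hypothetical NQN/bdev names and sizes; only the repeated "-n 1" matters here
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # rejected: "Requested NSID 1 already in use"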
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:07.714 [2024-05-16 20:11:54.637316] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:07.714 [2024-05-16 20:11:54.648630] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:07.714 [2024-05-16 20:11:54.648661] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:07.714 [2024-05-16 20:11:54.660018] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:07.714 [2024-05-16 20:11:54.660047] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:07.714 [2024-05-16 20:11:54.671303] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:07.714 [2024-05-16 20:11:54.671335] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:07.714 [2024-05-16 20:11:54.682484] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:07.714 [2024-05-16 20:11:54.682514] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:07.714 [2024-05-16 20:11:54.691699] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:07.714 [2024-05-16 20:11:54.691729] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:07.714
00:13:07.714 Latency(us)
00:13:07.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:07.714 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:13:07.714 Nvme1n1 : 5.01 11313.85 88.39 0.00 0.00 11298.32 4927.34 20194.80
00:13:07.714 ===================================================================================================================
00:13:07.714 Total : 11313.85 88.39 0.00 0.00 11298.32 4927.34 20194.80
00:13:07.714 [2024-05-16 20:11:54.698830] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:07.714 [2024-05-16 20:11:54.698869] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:07.714 [2024-05-16 20:11:54.706850] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:07.714 [2024-05-16 20:11:54.706888] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:07.714 [2024-05-16 20:11:54.714865] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:07.714 [2024-05-16 20:11:54.714888] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:07.714 [2024-05-16 20:11:54.722958] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:07.714 [2024-05-16 20:11:54.723011] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:07.714 [2024-05-16 20:11:54.730982] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:07.714 [2024-05-16 20:11:54.731034] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:07.714 [2024-05-16 20:11:54.738997] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:07.714 [2024-05-16 20:11:54.739047] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:07.714 [2024-05-16 20:11:54.746998] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:07.714 [2024-05-16 20:11:54.747047]
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.714 [2024-05-16 20:11:54.755032] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.714 [2024-05-16 20:11:54.755083] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.714 [2024-05-16 20:11:54.763048] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.714 [2024-05-16 20:11:54.763098] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.714 [2024-05-16 20:11:54.771071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.714 [2024-05-16 20:11:54.771120] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.714 [2024-05-16 20:11:54.779093] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.714 [2024-05-16 20:11:54.779145] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.714 [2024-05-16 20:11:54.787117] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.714 [2024-05-16 20:11:54.787167] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.714 [2024-05-16 20:11:54.795138] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.714 [2024-05-16 20:11:54.795188] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.714 [2024-05-16 20:11:54.803160] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.714 [2024-05-16 20:11:54.803210] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.714 [2024-05-16 20:11:54.811181] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.714 [2024-05-16 20:11:54.811229] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.714 [2024-05-16 20:11:54.819208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.714 [2024-05-16 20:11:54.819262] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.714 [2024-05-16 20:11:54.827176] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.714 [2024-05-16 20:11:54.827214] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.714 [2024-05-16 20:11:54.835194] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.714 [2024-05-16 20:11:54.835234] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.714 [2024-05-16 20:11:54.843235] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.714 [2024-05-16 20:11:54.843261] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.714 [2024-05-16 20:11:54.851250] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.714 [2024-05-16 20:11:54.851276] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.973 [2024-05-16 20:11:54.859297] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.973 [2024-05-16 20:11:54.859334] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.973 [2024-05-16 20:11:54.867330] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.973 [2024-05-16 20:11:54.867378] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.973 [2024-05-16 20:11:54.875360] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.973 [2024-05-16 20:11:54.875414] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.973 [2024-05-16 20:11:54.883333] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.973 [2024-05-16 20:11:54.883371] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.973 [2024-05-16 20:11:54.891353] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.973 [2024-05-16 20:11:54.891380] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.973 [2024-05-16 20:11:54.899374] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.973 [2024-05-16 20:11:54.899400] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.973 [2024-05-16 20:11:54.907395] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.973 [2024-05-16 20:11:54.907420] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.973 [2024-05-16 20:11:54.915431] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.973 [2024-05-16 20:11:54.915466] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.973 [2024-05-16 20:11:54.923487] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.973 [2024-05-16 20:11:54.923536] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.973 [2024-05-16 20:11:54.931515] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.973 [2024-05-16 20:11:54.931566] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.973 [2024-05-16 20:11:54.939486] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.973 [2024-05-16 20:11:54.939513] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.973 [2024-05-16 20:11:54.947508] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.973 [2024-05-16 20:11:54.947534] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.973 [2024-05-16 20:11:54.955530] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.973 [2024-05-16 20:11:54.955558] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (192628) - No such process 00:13:07.973 20:11:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 192628 00:13:07.973 20:11:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.973 20:11:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.973 20:11:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:07.973 20:11:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.973 20:11:54 nvmf_tcp.nvmf_zcopy -- 
target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:07.973 20:11:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.973 20:11:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:07.973 delay0 00:13:07.973 20:11:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.973 20:11:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:07.973 20:11:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.973 20:11:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:07.973 20:11:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.973 20:11:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:07.973 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.973 [2024-05-16 20:11:55.080876] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:14.532 Initializing NVMe Controllers 00:13:14.532 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:14.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:14.532 Initialization complete. Launching workers. 00:13:14.532 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 284 00:13:14.532 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 571, failed to submit 33 00:13:14.532 success 408, unsuccess 163, failed 0 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:14.532 rmmod nvme_tcp 00:13:14.532 rmmod nvme_fabrics 00:13:14.532 rmmod nvme_keyring 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 191305 ']' 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 191305 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 191305 ']' 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 191305 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o 
comm= 191305 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 191305' 00:13:14.532 killing process with pid 191305 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 191305 00:13:14.532 [2024-05-16 20:12:01.280317] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 191305 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.532 20:12:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.057 20:12:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:17.057 00:13:17.057 real 0m27.834s 00:13:17.057 user 0m41.843s 00:13:17.057 sys 0m7.597s 00:13:17.057 20:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:17.057 20:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.057 ************************************ 00:13:17.057 END TEST nvmf_zcopy 00:13:17.057 ************************************ 00:13:17.057 20:12:03 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:17.057 20:12:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:17.057 20:12:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:17.057 20:12:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:17.057 ************************************ 00:13:17.057 START TEST nvmf_nmic 00:13:17.057 ************************************ 00:13:17.057 20:12:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:17.057 * Looking for test storage... 
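For reference, the nvmftestfini/killprocess teardown that ran above amounts to unloading the initiator-side kernel modules, stopping the target process, and flushing the test addressing. A minimal hand-run sketch using the names from this run; the PID (191305), interface (cvl_0_1) and namespace (cvl_0_0_ns_spdk) are specific to this host, and the "ip netns delete" step is an assumed counterpart of the earlier "ip netns add", not something shown verbatim in the log:

    # unload the kernel initiator modules pulled in by "nvme connect"
    sudo modprobe -v -r nvme-tcp
    sudo modprobe -v -r nvme-fabrics

    # stop the nvmf_tgt process whose PID was recorded at startup (191305 in this run)
    sudo kill 191305

    # drop the test address from the initiator port and remove the target netns
    sudo ip -4 addr flush cvl_0_1
    sudo ip netns delete cvl_0_0_ns_spdk    # assumption: mirrors the setup-time "ip netns add"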
00:13:17.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:17.057 20:12:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.057 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:17.057 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.057 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.058 20:12:03 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:17.058 20:12:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:18.430 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:18.430 
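The gather_supported_nvmf_pci_devs block that follows matches NICs by PCI vendor/device ID (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox parts). As a quick cross-check outside the test scripts, the same devices can be listed with plain lspci; a sketch, where the 0000:09:00.x addresses are the ones reported later in this run and will differ on other hosts:

    # list E810 functions by vendor:device ID, printing the full PCI domain
    lspci -D -d 8086:159b
    lspci -D -d 8086:1592
    # the net device behind a given PCI function (e.g. cvl_0_0)
    ls /sys/bus/pci/devices/0000:09:00.0/net/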
20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:18.430 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:18.430 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:18.430 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:18.430 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:18.430 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:18.430 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:18.430 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:18.430 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:18.430 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:18.430 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:18.431 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.431 20:12:05 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:18.431 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:18.431 Found net devices under 0000:09:00.0: cvl_0_0 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:18.431 Found net devices under 0000:09:00.1: cvl_0_1 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
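The nvmf_tcp_init steps that follow move cvl_0_0 into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24) and leave cvl_0_1 in the root namespace as the initiator (10.0.0.1/24). Once they complete, the layout can be sanity-checked with standard iproute2 commands; this is a verification sketch, not part of the test itself:

    ip netns list                                                    # expect: cvl_0_0_ns_spdk
    ip -4 addr show dev cvl_0_1                                      # initiator side, 10.0.0.1/24
    sudo ip netns exec cvl_0_0_ns_spdk ip -4 addr show dev cvl_0_0   # target side, 10.0.0.2/24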
00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:18.431 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:18.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:18.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:13:18.689 00:13:18.689 --- 10.0.0.2 ping statistics --- 00:13:18.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.689 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:18.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:18.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:13:18.689 00:13:18.689 --- 10.0.0.1 ping statistics --- 00:13:18.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.689 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=196031 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 196031 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 196031 ']' 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:18.689 20:12:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:18.689 [2024-05-16 20:12:05.727564] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:13:18.689 [2024-05-16 20:12:05.727642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.689 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.689 [2024-05-16 20:12:05.805995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:18.946 [2024-05-16 20:12:05.922296] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.947 [2024-05-16 20:12:05.922357] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
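The waitforlisten call above blocks until the freshly started nvmf_tgt answers on its JSON-RPC socket. Done by hand it is roughly a poll loop against rpc.py; a sketch under the paths used in this workspace, where the 0.5s interval and the use of rpc_get_methods as a probe are illustrative choices rather than what the helper literally does:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # retry until the target (pid 196031 in this run) serves RPC requests
    until "$spdk/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done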
00:13:18.947 [2024-05-16 20:12:05.922371] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.947 [2024-05-16 20:12:05.922383] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.947 [2024-05-16 20:12:05.922393] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:18.947 [2024-05-16 20:12:05.922497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.947 [2024-05-16 20:12:05.922524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.947 [2024-05-16 20:12:05.922559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:18.947 [2024-05-16 20:12:05.922560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.880 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:19.880 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:13:19.880 20:12:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:19.880 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:19.880 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.880 20:12:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.880 20:12:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:19.880 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.880 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.880 [2024-05-16 20:12:06.711680] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:19.880 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.880 20:12:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:19.880 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.880 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.881 Malloc0 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.881 [2024-05-16 20:12:06.764500] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:19.881 [2024-05-16 20:12:06.764807] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:19.881 test case1: single bdev can't be used in multiple subsystems 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.881 [2024-05-16 20:12:06.788605] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:19.881 [2024-05-16 20:12:06.788633] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:19.881 [2024-05-16 20:12:06.788664] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.881 request: 00:13:19.881 { 00:13:19.881 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:19.881 "namespace": { 00:13:19.881 "bdev_name": "Malloc0", 00:13:19.881 "no_auto_visible": false 00:13:19.881 }, 00:13:19.881 "method": "nvmf_subsystem_add_ns", 00:13:19.881 "req_id": 1 00:13:19.881 } 00:13:19.881 Got JSON-RPC error response 00:13:19.881 response: 00:13:19.881 { 00:13:19.881 "code": -32602, 00:13:19.881 "message": "Invalid parameters" 00:13:19.881 } 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:19.881 Adding namespace failed - expected result. 
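Test case 1 above exercises the exclusive_write claim through rpc_cmd; driven by hand against the same target the sequence is roughly the following, using the bdev and subsystem names from this run (the rpc.py path matches the rpc_py variable used elsewhere in this job):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # first claim succeeds
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # rejected: bdev already claimed (exclusive_write)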
00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:19.881 test case2: host connect to nvmf target in multiple paths 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.881 [2024-05-16 20:12:06.796730] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.881 20:12:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:20.446 20:12:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:21.011 20:12:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.011 20:12:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:13:21.011 20:12:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.011 20:12:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:21.011 20:12:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:13:22.905 20:12:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:22.905 20:12:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:22.905 20:12:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:22.905 20:12:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:22.905 20:12:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:22.905 20:12:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:13:22.905 20:12:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:22.905 [global] 00:13:22.905 thread=1 00:13:22.905 invalidate=1 00:13:22.905 rw=write 00:13:22.905 time_based=1 00:13:22.905 runtime=1 00:13:22.905 ioengine=libaio 00:13:22.905 direct=1 00:13:22.905 bs=4096 00:13:22.905 iodepth=1 00:13:22.905 norandommap=0 00:13:22.905 numjobs=1 00:13:22.905 00:13:22.905 verify_dump=1 00:13:22.905 verify_backlog=512 00:13:22.905 verify_state_save=0 00:13:22.905 do_verify=1 00:13:22.905 verify=crc32c-intel 00:13:22.905 [job0] 00:13:22.905 filename=/dev/nvme0n1 00:13:22.905 Could not set queue depth (nvme0n1) 00:13:23.474 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:23.474 fio-3.35 00:13:23.474 Starting 1 thread 00:13:24.408 00:13:24.408 job0: (groupid=0, jobs=1): err= 0: pid=197180: Thu May 16 20:12:11 2024 00:13:24.408 read: IOPS=20, BW=82.8KiB/s (84.7kB/s)(84.0KiB/1015msec) 00:13:24.408 slat (nsec): min=8661, max=47603, avg=24186.24, stdev=11057.45 
00:13:24.408 clat (usec): min=41527, max=42017, avg=41949.07, stdev=99.53 00:13:24.408 lat (usec): min=41536, max=42035, avg=41973.25, stdev=102.31 00:13:24.408 clat percentiles (usec): 00:13:24.408 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:13:24.408 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:13:24.408 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:24.408 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:24.408 | 99.99th=[42206] 00:13:24.408 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:13:24.408 slat (usec): min=7, max=29846, avg=69.69, stdev=1318.56 00:13:24.408 clat (usec): min=126, max=298, avg=185.10, stdev=48.52 00:13:24.408 lat (usec): min=135, max=30135, avg=254.79, stdev=1324.11 00:13:24.408 clat percentiles (usec): 00:13:24.408 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:13:24.408 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 174], 00:13:24.408 | 70.00th=[ 215], 80.00th=[ 241], 90.00th=[ 273], 95.00th=[ 277], 00:13:24.408 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 297], 99.95th=[ 297], 00:13:24.408 | 99.99th=[ 297] 00:13:24.408 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:24.408 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:24.408 lat (usec) : 250=84.80%, 500=11.26% 00:13:24.408 lat (msec) : 50=3.94% 00:13:24.408 cpu : usr=0.30%, sys=0.59%, ctx=536, majf=0, minf=2 00:13:24.408 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.408 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.408 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:24.408 00:13:24.408 Run status group 0 (all jobs): 00:13:24.408 READ: bw=82.8KiB/s (84.7kB/s), 82.8KiB/s-82.8KiB/s (84.7kB/s-84.7kB/s), io=84.0KiB (86.0kB), run=1015-1015msec 00:13:24.408 WRITE: bw=2018KiB/s (2066kB/s), 2018KiB/s-2018KiB/s (2066kB/s-2066kB/s), io=2048KiB (2097kB), run=1015-1015msec 00:13:24.408 00:13:24.408 Disk stats (read/write): 00:13:24.408 nvme0n1: ios=45/512, merge=0/0, ticks=1716/91, in_queue=1807, util=98.60% 00:13:24.408 20:12:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:24.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:24.666 rmmod nvme_tcp 00:13:24.666 rmmod nvme_fabrics 00:13:24.666 rmmod nvme_keyring 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 196031 ']' 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 196031 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 196031 ']' 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 196031 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 196031 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:24.666 20:12:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 196031' 00:13:24.667 killing process with pid 196031 00:13:24.667 20:12:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 196031 00:13:24.667 [2024-05-16 20:12:11.774415] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:24.667 20:12:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 196031 00:13:25.231 20:12:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:25.231 20:12:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:25.231 20:12:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:25.231 20:12:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:25.232 20:12:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:25.232 20:12:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.232 20:12:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.232 20:12:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.127 20:12:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:27.127 00:13:27.127 real 0m10.474s 00:13:27.127 user 0m25.472s 00:13:27.127 sys 0m2.306s 00:13:27.127 20:12:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:27.127 20:12:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:27.127 ************************************ 00:13:27.127 END TEST nvmf_nmic 00:13:27.127 ************************************ 00:13:27.127 20:12:14 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test 
nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:27.127 20:12:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:27.127 20:12:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:27.127 20:12:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:27.127 ************************************ 00:13:27.127 START TEST nvmf_fio_target 00:13:27.127 ************************************ 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:27.127 * Looking for test storage... 00:13:27.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.127 20:12:14 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:27.128 20:12:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.091 20:12:16 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:29.091 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:29.091 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.091 20:12:16 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:29.091 Found net devices under 0000:09:00.0: cvl_0_0 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:29.091 Found net devices under 0000:09:00.1: cvl_0_1 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:29.091 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:29.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:13:29.350 00:13:29.350 --- 10.0.0.2 ping statistics --- 00:13:29.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.350 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:29.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:29.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:13:29.350 00:13:29.350 --- 10.0.0.1 ping statistics --- 00:13:29.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.350 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=199342 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 199342 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 199342 ']' 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
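In outline, the nvmftestinit/nvmfappstart steps traced above, together with the target provisioning that target/fio.sh performs next, come down to the shell sequence below. This is a simplified sketch assembled only from the commands visible in this trace (the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses, the cnode1 NQN and the host NQN/ID are the values printed by the scripts here); the surrounding common.sh helper logic, retries and error handling are omitted.

  # 1) Loopback topology: the first E810 port is moved into a network namespace
  #    and used as the NVMe/TCP target side; the second port stays in the root
  #    namespace as the initiator side.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # root ns -> target ns check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> root ns check
  modprobe nvme-tcp                                                    # kernel initiator driver
  # nvmfappstart then runs the SPDK target application inside the namespace:
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

  # 2) Target provisioning traced further down (target/fio.sh lines 17-46):
  #    TCP transport, malloc and RAID bdevs, subsystem cnode1 with four
  #    namespaces, a TCP listener, then a kernel 'nvme connect' so the fio
  #    jobs can run against /dev/nvme0n1..nvme0n4.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in 0 1 2 3 4 5 6; do $rpc bdev_malloc_create 64 512; done      # Malloc0..Malloc6
  $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      --hostid=29f67375-a902-e411-ace9-001e67bc3c9a \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

With this topology the kernel initiator in the root namespace reaches the SPDK target at 10.0.0.2:4420, which is what the nvme connect and the fio-wrapper runs later in this log rely on.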
00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:29.350 20:12:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.350 [2024-05-16 20:12:16.345665] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:13:29.350 [2024-05-16 20:12:16.345736] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.350 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.350 [2024-05-16 20:12:16.408692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:29.608 [2024-05-16 20:12:16.519472] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.608 [2024-05-16 20:12:16.519522] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.608 [2024-05-16 20:12:16.519550] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.608 [2024-05-16 20:12:16.519561] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.608 [2024-05-16 20:12:16.519572] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:29.608 [2024-05-16 20:12:16.519675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.608 [2024-05-16 20:12:16.519704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.608 [2024-05-16 20:12:16.521873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:29.608 [2024-05-16 20:12:16.521877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.608 20:12:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:29.608 20:12:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:13:29.608 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:29.608 20:12:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:29.608 20:12:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.608 20:12:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.608 20:12:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:29.864 [2024-05-16 20:12:16.899282] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.864 20:12:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:30.121 20:12:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:30.121 20:12:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:30.378 20:12:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:30.378 20:12:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:30.944 20:12:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:30.944 20:12:17 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:30.944 20:12:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:30.944 20:12:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:31.202 20:12:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:31.459 20:12:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:31.459 20:12:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:31.718 20:12:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:31.718 20:12:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:31.976 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:31.976 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:32.234 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:32.491 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:32.491 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:32.749 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:32.749 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:33.031 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.301 [2024-05-16 20:12:20.336151] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:33.301 [2024-05-16 20:12:20.336492] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.301 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:33.581 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:33.857 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:34.458 20:12:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:13:34.458 20:12:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:13:34.458 20:12:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:34.458 20:12:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:13:34.458 20:12:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:13:34.458 20:12:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:13:36.455 20:12:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:36.455 20:12:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:36.455 20:12:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:36.455 20:12:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:13:36.455 20:12:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.455 20:12:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:13:36.455 20:12:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:36.455 [global] 00:13:36.455 thread=1 00:13:36.455 invalidate=1 00:13:36.455 rw=write 00:13:36.455 time_based=1 00:13:36.455 runtime=1 00:13:36.455 ioengine=libaio 00:13:36.455 direct=1 00:13:36.455 bs=4096 00:13:36.455 iodepth=1 00:13:36.455 norandommap=0 00:13:36.455 numjobs=1 00:13:36.455 00:13:36.455 verify_dump=1 00:13:36.455 verify_backlog=512 00:13:36.455 verify_state_save=0 00:13:36.455 do_verify=1 00:13:36.455 verify=crc32c-intel 00:13:36.455 [job0] 00:13:36.455 filename=/dev/nvme0n1 00:13:36.455 [job1] 00:13:36.455 filename=/dev/nvme0n2 00:13:36.455 [job2] 00:13:36.455 filename=/dev/nvme0n3 00:13:36.455 [job3] 00:13:36.455 filename=/dev/nvme0n4 00:13:36.455 Could not set queue depth (nvme0n1) 00:13:36.455 Could not set queue depth (nvme0n2) 00:13:36.455 Could not set queue depth (nvme0n3) 00:13:36.455 Could not set queue depth (nvme0n4) 00:13:36.775 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:36.775 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:36.775 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:36.775 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:36.775 fio-3.35 00:13:36.775 Starting 4 threads 00:13:38.237 00:13:38.237 job0: (groupid=0, jobs=1): err= 0: pid=200355: Thu May 16 20:12:24 2024 00:13:38.237 read: IOPS=1709, BW=6837KiB/s (7001kB/s)(6844KiB/1001msec) 00:13:38.237 slat (nsec): min=5497, max=50539, avg=11985.27, stdev=5792.77 00:13:38.237 clat (usec): min=168, max=41010, avg=308.21, stdev=986.70 00:13:38.237 lat (usec): min=175, max=41017, avg=320.20, stdev=986.62 00:13:38.237 clat percentiles (usec): 00:13:38.237 | 1.00th=[ 184], 5.00th=[ 200], 10.00th=[ 219], 20.00th=[ 233], 00:13:38.237 | 30.00th=[ 241], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 281], 00:13:38.237 | 70.00th=[ 310], 80.00th=[ 359], 90.00th=[ 375], 95.00th=[ 388], 00:13:38.237 | 99.00th=[ 449], 99.50th=[ 537], 99.90th=[ 586], 99.95th=[41157], 00:13:38.237 | 99.99th=[41157] 
00:13:38.237 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:13:38.237 slat (usec): min=7, max=837, avg=17.19, stdev=19.72 00:13:38.237 clat (usec): min=130, max=392, avg=196.39, stdev=36.69 00:13:38.237 lat (usec): min=141, max=1013, avg=213.58, stdev=41.03 00:13:38.237 clat percentiles (usec): 00:13:38.237 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 157], 20.00th=[ 167], 00:13:38.237 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 186], 60.00th=[ 196], 00:13:38.237 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 251], 95.00th=[ 269], 00:13:38.237 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 355], 99.95th=[ 363], 00:13:38.237 | 99.99th=[ 392] 00:13:38.237 bw ( KiB/s): min= 8192, max= 8192, per=38.62%, avg=8192.00, stdev= 0.00, samples=1 00:13:38.237 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:38.237 lat (usec) : 250=65.71%, 500=33.95%, 750=0.32% 00:13:38.237 lat (msec) : 50=0.03% 00:13:38.237 cpu : usr=4.90%, sys=6.50%, ctx=3762, majf=0, minf=1 00:13:38.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:38.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.237 issued rwts: total=1711,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:38.237 job1: (groupid=0, jobs=1): err= 0: pid=200360: Thu May 16 20:12:24 2024 00:13:38.237 read: IOPS=21, BW=85.4KiB/s (87.4kB/s)(88.0KiB/1031msec) 00:13:38.237 slat (nsec): min=7253, max=49774, avg=20704.68, stdev=9743.61 00:13:38.237 clat (usec): min=40842, max=42064, avg=41436.83, stdev=523.95 00:13:38.237 lat (usec): min=40878, max=42084, avg=41457.53, stdev=523.70 00:13:38.237 clat percentiles (usec): 00:13:38.237 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:38.237 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:13:38.237 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:38.237 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:38.237 | 99.99th=[42206] 00:13:38.237 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:13:38.237 slat (nsec): min=7523, max=35778, avg=9117.06, stdev=2143.26 00:13:38.237 clat (usec): min=144, max=324, avg=220.04, stdev=18.78 00:13:38.237 lat (usec): min=152, max=360, avg=229.16, stdev=19.02 00:13:38.237 clat percentiles (usec): 00:13:38.237 | 1.00th=[ 159], 5.00th=[ 188], 10.00th=[ 200], 20.00th=[ 210], 00:13:38.237 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 225], 00:13:38.237 | 70.00th=[ 229], 80.00th=[ 233], 90.00th=[ 241], 95.00th=[ 245], 00:13:38.237 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 326], 99.95th=[ 326], 00:13:38.237 | 99.99th=[ 326] 00:13:38.237 bw ( KiB/s): min= 4096, max= 4096, per=19.31%, avg=4096.00, stdev= 0.00, samples=1 00:13:38.237 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:38.237 lat (usec) : 250=93.45%, 500=2.43% 00:13:38.237 lat (msec) : 50=4.12% 00:13:38.237 cpu : usr=0.58%, sys=0.29%, ctx=535, majf=0, minf=1 00:13:38.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:38.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.237 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.237 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:13:38.237 job2: (groupid=0, jobs=1): err= 0: pid=200361: Thu May 16 20:12:24 2024 00:13:38.237 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:13:38.237 slat (nsec): min=6639, max=32877, avg=17908.14, stdev=5549.93 00:13:38.237 clat (usec): min=40846, max=42015, avg=41240.39, stdev=467.68 00:13:38.237 lat (usec): min=40852, max=42029, avg=41258.30, stdev=467.19 00:13:38.237 clat percentiles (usec): 00:13:38.237 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:38.237 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:38.237 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:38.237 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:38.237 | 99.99th=[42206] 00:13:38.237 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:13:38.237 slat (nsec): min=5500, max=43616, avg=7256.12, stdev=2540.48 00:13:38.237 clat (usec): min=154, max=622, avg=190.45, stdev=26.06 00:13:38.237 lat (usec): min=160, max=629, avg=197.71, stdev=26.51 00:13:38.237 clat percentiles (usec): 00:13:38.237 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 178], 00:13:38.237 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:13:38.237 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 210], 95.00th=[ 219], 00:13:38.237 | 99.00th=[ 231], 99.50th=[ 269], 99.90th=[ 619], 99.95th=[ 619], 00:13:38.237 | 99.99th=[ 619] 00:13:38.237 bw ( KiB/s): min= 4096, max= 4096, per=19.31%, avg=4096.00, stdev= 0.00, samples=1 00:13:38.237 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:38.237 lat (usec) : 250=95.13%, 500=0.56%, 750=0.19% 00:13:38.237 lat (msec) : 50=4.12% 00:13:38.237 cpu : usr=0.20%, sys=0.30%, ctx=534, majf=0, minf=1 00:13:38.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:38.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.238 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:38.238 job3: (groupid=0, jobs=1): err= 0: pid=200362: Thu May 16 20:12:24 2024 00:13:38.238 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:13:38.238 slat (nsec): min=6214, max=35021, avg=7395.73, stdev=1347.35 00:13:38.238 clat (usec): min=180, max=690, avg=261.09, stdev=49.30 00:13:38.238 lat (usec): min=187, max=698, avg=268.48, stdev=49.57 00:13:38.238 clat percentiles (usec): 00:13:38.238 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 221], 00:13:38.238 | 30.00th=[ 237], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 265], 00:13:38.238 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 363], 00:13:38.238 | 99.00th=[ 404], 99.50th=[ 490], 99.90th=[ 586], 99.95th=[ 586], 00:13:38.238 | 99.99th=[ 693] 00:13:38.238 write: IOPS=2393, BW=9574KiB/s (9804kB/s)(9584KiB/1001msec); 0 zone resets 00:13:38.238 slat (nsec): min=8110, max=58939, avg=9462.31, stdev=1748.07 00:13:38.238 clat (usec): min=127, max=898, avg=174.17, stdev=32.78 00:13:38.238 lat (usec): min=136, max=908, avg=183.63, stdev=32.98 00:13:38.238 clat percentiles (usec): 00:13:38.238 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:13:38.238 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:13:38.238 | 70.00th=[ 180], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 217], 
00:13:38.238 | 99.00th=[ 262], 99.50th=[ 289], 99.90th=[ 570], 99.95th=[ 717], 00:13:38.238 | 99.99th=[ 898] 00:13:38.238 bw ( KiB/s): min=10544, max=10544, per=49.70%, avg=10544.00, stdev= 0.00, samples=1 00:13:38.238 iops : min= 2636, max= 2636, avg=2636.00, stdev= 0.00, samples=1 00:13:38.238 lat (usec) : 250=73.72%, 500=25.97%, 750=0.29%, 1000=0.02% 00:13:38.238 cpu : usr=2.60%, sys=5.20%, ctx=4445, majf=0, minf=2 00:13:38.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:38.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.238 issued rwts: total=2048,2396,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:38.238 00:13:38.238 Run status group 0 (all jobs): 00:13:38.238 READ: bw=14.4MiB/s (15.1MB/s), 85.4KiB/s-8184KiB/s (87.4kB/s-8380kB/s), io=14.9MiB (15.6MB), run=1001-1031msec 00:13:38.238 WRITE: bw=20.7MiB/s (21.7MB/s), 1986KiB/s-9574KiB/s (2034kB/s-9804kB/s), io=21.4MiB (22.4MB), run=1001-1031msec 00:13:38.238 00:13:38.238 Disk stats (read/write): 00:13:38.238 nvme0n1: ios=1590/1671, merge=0/0, ticks=684/282, in_queue=966, util=97.09% 00:13:38.238 nvme0n2: ios=44/512, merge=0/0, ticks=1700/110, in_queue=1810, util=97.35% 00:13:38.238 nvme0n3: ios=44/512, merge=0/0, ticks=969/92, in_queue=1061, util=89.68% 00:13:38.238 nvme0n4: ios=1823/2048, merge=0/0, ticks=748/341, in_queue=1089, util=97.56% 00:13:38.238 20:12:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:38.238 [global] 00:13:38.238 thread=1 00:13:38.238 invalidate=1 00:13:38.238 rw=randwrite 00:13:38.238 time_based=1 00:13:38.238 runtime=1 00:13:38.238 ioengine=libaio 00:13:38.238 direct=1 00:13:38.238 bs=4096 00:13:38.238 iodepth=1 00:13:38.238 norandommap=0 00:13:38.238 numjobs=1 00:13:38.238 00:13:38.238 verify_dump=1 00:13:38.238 verify_backlog=512 00:13:38.238 verify_state_save=0 00:13:38.238 do_verify=1 00:13:38.238 verify=crc32c-intel 00:13:38.238 [job0] 00:13:38.238 filename=/dev/nvme0n1 00:13:38.238 [job1] 00:13:38.238 filename=/dev/nvme0n2 00:13:38.238 [job2] 00:13:38.238 filename=/dev/nvme0n3 00:13:38.238 [job3] 00:13:38.238 filename=/dev/nvme0n4 00:13:38.238 Could not set queue depth (nvme0n1) 00:13:38.238 Could not set queue depth (nvme0n2) 00:13:38.238 Could not set queue depth (nvme0n3) 00:13:38.238 Could not set queue depth (nvme0n4) 00:13:38.238 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:38.238 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:38.238 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:38.238 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:38.238 fio-3.35 00:13:38.238 Starting 4 threads 00:13:39.694 00:13:39.694 job0: (groupid=0, jobs=1): err= 0: pid=200660: Thu May 16 20:12:26 2024 00:13:39.694 read: IOPS=22, BW=90.2KiB/s (92.4kB/s)(92.0KiB/1020msec) 00:13:39.694 slat (nsec): min=8193, max=29706, avg=17217.61, stdev=3986.89 00:13:39.694 clat (usec): min=293, max=42003, avg=39515.57, stdev=8563.58 00:13:39.694 lat (usec): min=311, max=42021, avg=39532.79, stdev=8563.41 00:13:39.694 clat percentiles 
(usec): 00:13:39.694 | 1.00th=[ 293], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:39.694 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:39.695 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:39.695 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:39.695 | 99.99th=[42206] 00:13:39.695 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:13:39.695 slat (nsec): min=7333, max=45279, avg=8729.77, stdev=2364.71 00:13:39.695 clat (usec): min=148, max=377, avg=203.30, stdev=33.61 00:13:39.695 lat (usec): min=157, max=386, avg=212.03, stdev=33.83 00:13:39.695 clat percentiles (usec): 00:13:39.695 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:13:39.695 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 198], 60.00th=[ 221], 00:13:39.695 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 251], 00:13:39.695 | 99.00th=[ 265], 99.50th=[ 285], 99.90th=[ 379], 99.95th=[ 379], 00:13:39.695 | 99.99th=[ 379] 00:13:39.695 bw ( KiB/s): min= 4096, max= 4096, per=15.69%, avg=4096.00, stdev= 0.00, samples=1 00:13:39.695 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:39.695 lat (usec) : 250=90.47%, 500=5.42% 00:13:39.695 lat (msec) : 50=4.11% 00:13:39.695 cpu : usr=0.20%, sys=0.69%, ctx=535, majf=0, minf=1 00:13:39.695 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:39.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.695 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.695 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:39.695 job1: (groupid=0, jobs=1): err= 0: pid=200678: Thu May 16 20:12:26 2024 00:13:39.695 read: IOPS=2279, BW=9119KiB/s (9338kB/s)(9128KiB/1001msec) 00:13:39.695 slat (nsec): min=4257, max=63454, avg=9972.65, stdev=7456.08 00:13:39.695 clat (usec): min=158, max=576, avg=227.15, stdev=71.21 00:13:39.695 lat (usec): min=162, max=584, avg=237.12, stdev=75.87 00:13:39.695 clat percentiles (usec): 00:13:39.695 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:13:39.695 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:13:39.695 | 70.00th=[ 212], 80.00th=[ 281], 90.00th=[ 343], 95.00th=[ 404], 00:13:39.695 | 99.00th=[ 465], 99.50th=[ 474], 99.90th=[ 498], 99.95th=[ 510], 00:13:39.695 | 99.99th=[ 578] 00:13:39.695 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:39.695 slat (nsec): min=5599, max=53184, avg=9751.27, stdev=4611.66 00:13:39.695 clat (usec): min=124, max=359, avg=164.23, stdev=27.19 00:13:39.695 lat (usec): min=131, max=371, avg=173.98, stdev=28.31 00:13:39.695 clat percentiles (usec): 00:13:39.695 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 143], 00:13:39.695 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 161], 00:13:39.695 | 70.00th=[ 176], 80.00th=[ 190], 90.00th=[ 204], 95.00th=[ 217], 00:13:39.695 | 99.00th=[ 241], 99.50th=[ 251], 99.90th=[ 269], 99.95th=[ 277], 00:13:39.695 | 99.99th=[ 359] 00:13:39.695 bw ( KiB/s): min=12288, max=12288, per=47.08%, avg=12288.00, stdev= 0.00, samples=1 00:13:39.695 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:39.695 lat (usec) : 250=88.37%, 500=11.59%, 750=0.04% 00:13:39.695 cpu : usr=2.80%, sys=4.80%, ctx=4845, majf=0, minf=2 00:13:39.695 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:39.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.695 issued rwts: total=2282,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.695 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:39.695 job2: (groupid=0, jobs=1): err= 0: pid=200717: Thu May 16 20:12:26 2024 00:13:39.695 read: IOPS=1064, BW=4260KiB/s (4362kB/s)(4264KiB/1001msec) 00:13:39.695 slat (nsec): min=5389, max=37078, avg=10590.03, stdev=4847.25 00:13:39.695 clat (usec): min=194, max=42021, avg=637.39, stdev=3883.73 00:13:39.695 lat (usec): min=201, max=42028, avg=647.98, stdev=3884.16 00:13:39.695 clat percentiles (usec): 00:13:39.695 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 223], 00:13:39.695 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 247], 00:13:39.695 | 70.00th=[ 258], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 355], 00:13:39.695 | 99.00th=[ 9110], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:13:39.695 | 99.99th=[42206] 00:13:39.695 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:39.695 slat (nsec): min=6011, max=53113, avg=11444.21, stdev=5260.57 00:13:39.695 clat (usec): min=144, max=492, avg=184.87, stdev=28.73 00:13:39.695 lat (usec): min=153, max=507, avg=196.32, stdev=27.97 00:13:39.695 clat percentiles (usec): 00:13:39.695 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 165], 00:13:39.695 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:13:39.695 | 70.00th=[ 188], 80.00th=[ 200], 90.00th=[ 233], 95.00th=[ 243], 00:13:39.695 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 392], 99.95th=[ 494], 00:13:39.695 | 99.99th=[ 494] 00:13:39.695 bw ( KiB/s): min= 4096, max= 4096, per=15.69%, avg=4096.00, stdev= 0.00, samples=1 00:13:39.695 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:39.695 lat (usec) : 250=83.82%, 500=15.53%, 750=0.23% 00:13:39.695 lat (msec) : 10=0.04%, 50=0.38% 00:13:39.695 cpu : usr=1.90%, sys=2.60%, ctx=2602, majf=0, minf=1 00:13:39.695 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:39.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.695 issued rwts: total=1066,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.695 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:39.695 job3: (groupid=0, jobs=1): err= 0: pid=200718: Thu May 16 20:12:26 2024 00:13:39.695 read: IOPS=1945, BW=7780KiB/s (7967kB/s)(7788KiB/1001msec) 00:13:39.695 slat (nsec): min=4615, max=66380, avg=12750.33, stdev=8488.57 00:13:39.695 clat (usec): min=183, max=1008, avg=290.45, stdev=77.76 00:13:39.695 lat (usec): min=191, max=1021, avg=303.20, stdev=81.73 00:13:39.695 clat percentiles (usec): 00:13:39.695 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 225], 00:13:39.695 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 269], 60.00th=[ 297], 00:13:39.695 | 70.00th=[ 322], 80.00th=[ 351], 90.00th=[ 392], 95.00th=[ 429], 00:13:39.695 | 99.00th=[ 506], 99.50th=[ 578], 99.90th=[ 881], 99.95th=[ 1012], 00:13:39.695 | 99.99th=[ 1012] 00:13:39.695 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:13:39.695 slat (nsec): min=5984, max=55325, avg=11265.20, stdev=6141.46 00:13:39.695 clat (usec): min=125, max=1445, avg=182.57, stdev=59.51 00:13:39.695 lat 
(usec): min=132, max=1455, avg=193.84, stdev=61.33 00:13:39.695 clat percentiles (usec): 00:13:39.695 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:13:39.695 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 174], 00:13:39.695 | 70.00th=[ 188], 80.00th=[ 208], 90.00th=[ 237], 95.00th=[ 251], 00:13:39.695 | 99.00th=[ 388], 99.50th=[ 396], 99.90th=[ 971], 99.95th=[ 1106], 00:13:39.695 | 99.99th=[ 1450] 00:13:39.695 bw ( KiB/s): min=10360, max=10360, per=39.69%, avg=10360.00, stdev= 0.00, samples=1 00:13:39.695 iops : min= 2590, max= 2590, avg=2590.00, stdev= 0.00, samples=1 00:13:39.695 lat (usec) : 250=66.48%, 500=32.87%, 750=0.50%, 1000=0.08% 00:13:39.695 lat (msec) : 2=0.08% 00:13:39.695 cpu : usr=3.10%, sys=4.40%, ctx=3998, majf=0, minf=1 00:13:39.695 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:39.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.695 issued rwts: total=1947,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.695 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:39.695 00:13:39.695 Run status group 0 (all jobs): 00:13:39.695 READ: bw=20.4MiB/s (21.4MB/s), 90.2KiB/s-9119KiB/s (92.4kB/s-9338kB/s), io=20.8MiB (21.8MB), run=1001-1020msec 00:13:39.695 WRITE: bw=25.5MiB/s (26.7MB/s), 2008KiB/s-9.99MiB/s (2056kB/s-10.5MB/s), io=26.0MiB (27.3MB), run=1001-1020msec 00:13:39.695 00:13:39.695 Disk stats (read/write): 00:13:39.695 nvme0n1: ios=68/512, merge=0/0, ticks=728/103, in_queue=831, util=86.17% 00:13:39.695 nvme0n2: ios=2097/2061, merge=0/0, ticks=1092/318, in_queue=1410, util=89.11% 00:13:39.695 nvme0n3: ios=994/1024, merge=0/0, ticks=698/185, in_queue=883, util=94.44% 00:13:39.695 nvme0n4: ios=1593/2017, merge=0/0, ticks=596/348, in_queue=944, util=94.18% 00:13:39.695 20:12:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:39.695 [global] 00:13:39.695 thread=1 00:13:39.695 invalidate=1 00:13:39.695 rw=write 00:13:39.695 time_based=1 00:13:39.695 runtime=1 00:13:39.695 ioengine=libaio 00:13:39.695 direct=1 00:13:39.695 bs=4096 00:13:39.695 iodepth=128 00:13:39.695 norandommap=0 00:13:39.695 numjobs=1 00:13:39.695 00:13:39.695 verify_dump=1 00:13:39.695 verify_backlog=512 00:13:39.695 verify_state_save=0 00:13:39.695 do_verify=1 00:13:39.695 verify=crc32c-intel 00:13:39.695 [job0] 00:13:39.695 filename=/dev/nvme0n1 00:13:39.695 [job1] 00:13:39.695 filename=/dev/nvme0n2 00:13:39.695 [job2] 00:13:39.695 filename=/dev/nvme0n3 00:13:39.695 [job3] 00:13:39.695 filename=/dev/nvme0n4 00:13:39.695 Could not set queue depth (nvme0n1) 00:13:39.695 Could not set queue depth (nvme0n2) 00:13:39.695 Could not set queue depth (nvme0n3) 00:13:39.695 Could not set queue depth (nvme0n4) 00:13:39.695 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:39.695 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:39.695 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:39.695 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:39.695 fio-3.35 00:13:39.695 Starting 4 threads 00:13:41.151 00:13:41.151 job0: (groupid=0, jobs=1): err= 0: pid=200957: Thu 
May 16 20:12:27 2024 00:13:41.151 read: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec) 00:13:41.151 slat (usec): min=3, max=13898, avg=147.23, stdev=955.10 00:13:41.151 clat (usec): min=6619, max=62950, avg=17960.16, stdev=6722.30 00:13:41.151 lat (usec): min=6636, max=62964, avg=18107.39, stdev=6805.81 00:13:41.151 clat percentiles (usec): 00:13:41.151 | 1.00th=[ 8586], 5.00th=[11469], 10.00th=[12125], 20.00th=[13042], 00:13:41.151 | 30.00th=[15008], 40.00th=[15795], 50.00th=[16712], 60.00th=[17171], 00:13:41.151 | 70.00th=[19006], 80.00th=[21365], 90.00th=[24511], 95.00th=[28443], 00:13:41.151 | 99.00th=[51643], 99.50th=[60556], 99.90th=[63177], 99.95th=[63177], 00:13:41.151 | 99.99th=[63177] 00:13:41.151 write: IOPS=2875, BW=11.2MiB/s (11.8MB/s)(11.4MiB/1012msec); 0 zone resets 00:13:41.151 slat (usec): min=3, max=17002, avg=203.85, stdev=995.29 00:13:41.151 clat (usec): min=1488, max=81968, avg=28314.71, stdev=19234.89 00:13:41.151 lat (usec): min=1511, max=81984, avg=28518.55, stdev=19368.68 00:13:41.151 clat percentiles (usec): 00:13:41.151 | 1.00th=[ 7177], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[13042], 00:13:41.151 | 30.00th=[15533], 40.00th=[17957], 50.00th=[22152], 60.00th=[24511], 00:13:41.151 | 70.00th=[31327], 80.00th=[43254], 90.00th=[62653], 95.00th=[71828], 00:13:41.151 | 99.00th=[79168], 99.50th=[81265], 99.90th=[82314], 99.95th=[82314], 00:13:41.151 | 99.99th=[82314] 00:13:41.151 bw ( KiB/s): min= 9800, max=12464, per=17.99%, avg=11132.00, stdev=1883.73, samples=2 00:13:41.151 iops : min= 2450, max= 3116, avg=2783.00, stdev=470.93, samples=2 00:13:41.151 lat (msec) : 2=0.02%, 10=3.73%, 20=55.32%, 50=32.52%, 100=8.41% 00:13:41.151 cpu : usr=4.35%, sys=5.54%, ctx=292, majf=0, minf=1 00:13:41.151 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:41.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:41.151 issued rwts: total=2560,2910,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:41.151 job1: (groupid=0, jobs=1): err= 0: pid=200960: Thu May 16 20:12:27 2024 00:13:41.151 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:13:41.151 slat (usec): min=2, max=26765, avg=99.73, stdev=763.22 00:13:41.151 clat (usec): min=4426, max=53687, avg=12749.02, stdev=6163.85 00:13:41.151 lat (usec): min=4437, max=62317, avg=12848.75, stdev=6229.58 00:13:41.151 clat percentiles (usec): 00:13:41.151 | 1.00th=[ 7242], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10028], 00:13:41.151 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10814], 60.00th=[11076], 00:13:41.151 | 70.00th=[11863], 80.00th=[13829], 90.00th=[19006], 95.00th=[20317], 00:13:41.151 | 99.00th=[50594], 99.50th=[50594], 99.90th=[53740], 99.95th=[53740], 00:13:41.151 | 99.99th=[53740] 00:13:41.151 write: IOPS=5046, BW=19.7MiB/s (20.7MB/s)(19.9MiB/1011msec); 0 zone resets 00:13:41.151 slat (usec): min=3, max=33087, avg=93.57, stdev=775.75 00:13:41.151 clat (usec): min=854, max=66422, avg=13506.64, stdev=8554.60 00:13:41.151 lat (usec): min=864, max=66445, avg=13600.21, stdev=8626.71 00:13:41.151 clat percentiles (usec): 00:13:41.151 | 1.00th=[ 4490], 5.00th=[ 6652], 10.00th=[ 7832], 20.00th=[ 9634], 00:13:41.151 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11207], 60.00th=[11469], 00:13:41.151 | 70.00th=[11863], 80.00th=[14877], 90.00th=[21103], 95.00th=[29754], 00:13:41.151 | 99.00th=[54264], 99.50th=[54264], 
99.90th=[57410], 99.95th=[57410], 00:13:41.151 | 99.99th=[66323] 00:13:41.151 bw ( KiB/s): min=18360, max=21440, per=32.15%, avg=19900.00, stdev=2177.89, samples=2 00:13:41.151 iops : min= 4590, max= 5360, avg=4975.00, stdev=544.47, samples=2 00:13:41.151 lat (usec) : 1000=0.06% 00:13:41.151 lat (msec) : 4=0.21%, 10=21.16%, 20=69.90%, 50=6.97%, 100=1.70% 00:13:41.151 cpu : usr=3.86%, sys=6.63%, ctx=428, majf=0, minf=1 00:13:41.151 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:41.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:41.151 issued rwts: total=4608,5102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:41.151 job2: (groupid=0, jobs=1): err= 0: pid=200963: Thu May 16 20:12:27 2024 00:13:41.151 read: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec) 00:13:41.151 slat (usec): min=2, max=11618, avg=96.63, stdev=628.07 00:13:41.151 clat (usec): min=4224, max=47352, avg=12846.64, stdev=4364.03 00:13:41.151 lat (usec): min=4232, max=47362, avg=12943.26, stdev=4392.08 00:13:41.151 clat percentiles (usec): 00:13:41.151 | 1.00th=[ 6390], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10683], 00:13:41.151 | 30.00th=[11076], 40.00th=[11600], 50.00th=[12256], 60.00th=[12649], 00:13:41.151 | 70.00th=[12911], 80.00th=[14091], 90.00th=[16188], 95.00th=[18482], 00:13:41.151 | 99.00th=[38011], 99.50th=[38011], 99.90th=[47449], 99.95th=[47449], 00:13:41.151 | 99.99th=[47449] 00:13:41.151 write: IOPS=5036, BW=19.7MiB/s (20.6MB/s)(19.9MiB/1013msec); 0 zone resets 00:13:41.151 slat (usec): min=3, max=18838, avg=100.06, stdev=683.21 00:13:41.151 clat (usec): min=1212, max=54113, avg=13511.79, stdev=6151.47 00:13:41.151 lat (usec): min=1224, max=54124, avg=13611.85, stdev=6201.23 00:13:41.151 clat percentiles (usec): 00:13:41.151 | 1.00th=[ 4293], 5.00th=[ 7177], 10.00th=[ 9110], 20.00th=[10421], 00:13:41.151 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11863], 60.00th=[12649], 00:13:41.151 | 70.00th=[13435], 80.00th=[13698], 90.00th=[22938], 95.00th=[25297], 00:13:41.151 | 99.00th=[37487], 99.50th=[47973], 99.90th=[54264], 99.95th=[54264], 00:13:41.151 | 99.99th=[54264] 00:13:41.151 bw ( KiB/s): min=17136, max=22664, per=32.15%, avg=19900.00, stdev=3908.89, samples=2 00:13:41.151 iops : min= 4284, max= 5666, avg=4975.00, stdev=977.22, samples=2 00:13:41.151 lat (msec) : 2=0.26%, 4=0.23%, 10=10.43%, 20=79.62%, 50=9.22% 00:13:41.151 lat (msec) : 100=0.25% 00:13:41.151 cpu : usr=5.14%, sys=8.40%, ctx=421, majf=0, minf=1 00:13:41.151 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:41.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:41.151 issued rwts: total=4608,5102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:41.151 job3: (groupid=0, jobs=1): err= 0: pid=200964: Thu May 16 20:12:27 2024 00:13:41.151 read: IOPS=2338, BW=9353KiB/s (9578kB/s)(9400KiB/1005msec) 00:13:41.151 slat (usec): min=3, max=24679, avg=210.99, stdev=1410.09 00:13:41.151 clat (usec): min=3978, max=86770, avg=26307.66, stdev=13823.80 00:13:41.151 lat (usec): min=3993, max=86786, avg=26518.66, stdev=13950.77 00:13:41.151 clat percentiles (usec): 00:13:41.151 | 1.00th=[ 6390], 5.00th=[11863], 10.00th=[14091], 
20.00th=[15008], 00:13:41.151 | 30.00th=[18744], 40.00th=[20579], 50.00th=[22676], 60.00th=[25560], 00:13:41.152 | 70.00th=[29754], 80.00th=[33162], 90.00th=[41681], 95.00th=[53740], 00:13:41.152 | 99.00th=[86508], 99.50th=[86508], 99.90th=[86508], 99.95th=[86508], 00:13:41.152 | 99.99th=[86508] 00:13:41.152 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:13:41.152 slat (usec): min=4, max=23339, avg=180.12, stdev=1236.05 00:13:41.152 clat (usec): min=1434, max=86675, avg=25568.57, stdev=17684.58 00:13:41.152 lat (usec): min=1447, max=86683, avg=25748.69, stdev=17818.43 00:13:41.152 clat percentiles (usec): 00:13:41.152 | 1.00th=[ 6456], 5.00th=[ 7898], 10.00th=[ 9372], 20.00th=[11863], 00:13:41.152 | 30.00th=[12125], 40.00th=[12518], 50.00th=[22938], 60.00th=[24511], 00:13:41.152 | 70.00th=[28705], 80.00th=[34341], 90.00th=[54789], 95.00th=[66323], 00:13:41.152 | 99.00th=[81265], 99.50th=[81265], 99.90th=[84411], 99.95th=[84411], 00:13:41.152 | 99.99th=[86508] 00:13:41.152 bw ( KiB/s): min= 8192, max=12288, per=16.55%, avg=10240.00, stdev=2896.31, samples=2 00:13:41.152 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:13:41.152 lat (msec) : 2=0.10%, 4=0.02%, 10=5.89%, 20=34.89%, 50=49.10% 00:13:41.152 lat (msec) : 100=10.00% 00:13:41.152 cpu : usr=3.09%, sys=5.98%, ctx=229, majf=0, minf=1 00:13:41.152 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:13:41.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:41.152 issued rwts: total=2350,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:41.152 00:13:41.152 Run status group 0 (all jobs): 00:13:41.152 READ: bw=54.5MiB/s (57.1MB/s), 9353KiB/s-17.8MiB/s (9578kB/s-18.7MB/s), io=55.2MiB (57.9MB), run=1005-1013msec 00:13:41.152 WRITE: bw=60.4MiB/s (63.4MB/s), 9.95MiB/s-19.7MiB/s (10.4MB/s-20.7MB/s), io=61.2MiB (64.2MB), run=1005-1013msec 00:13:41.152 00:13:41.152 Disk stats (read/write): 00:13:41.152 nvme0n1: ios=2082/2311, merge=0/0, ticks=37158/66547, in_queue=103705, util=90.78% 00:13:41.152 nvme0n2: ios=3739/4096, merge=0/0, ticks=24203/24800, in_queue=49003, util=97.76% 00:13:41.152 nvme0n3: ios=3962/4096, merge=0/0, ticks=31694/30001, in_queue=61695, util=88.91% 00:13:41.152 nvme0n4: ios=2070/2175, merge=0/0, ticks=25791/24890, in_queue=50681, util=97.57% 00:13:41.152 20:12:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:41.152 [global] 00:13:41.152 thread=1 00:13:41.152 invalidate=1 00:13:41.152 rw=randwrite 00:13:41.152 time_based=1 00:13:41.152 runtime=1 00:13:41.152 ioengine=libaio 00:13:41.152 direct=1 00:13:41.152 bs=4096 00:13:41.152 iodepth=128 00:13:41.152 norandommap=0 00:13:41.152 numjobs=1 00:13:41.152 00:13:41.152 verify_dump=1 00:13:41.152 verify_backlog=512 00:13:41.152 verify_state_save=0 00:13:41.152 do_verify=1 00:13:41.152 verify=crc32c-intel 00:13:41.152 [job0] 00:13:41.152 filename=/dev/nvme0n1 00:13:41.152 [job1] 00:13:41.152 filename=/dev/nvme0n2 00:13:41.152 [job2] 00:13:41.152 filename=/dev/nvme0n3 00:13:41.152 [job3] 00:13:41.152 filename=/dev/nvme0n4 00:13:41.152 Could not set queue depth (nvme0n1) 00:13:41.152 Could not set queue depth (nvme0n2) 00:13:41.152 Could not set queue depth (nvme0n3) 00:13:41.152 Could not set queue depth 
(nvme0n4) 00:13:41.152 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:41.152 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:41.152 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:41.152 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:41.152 fio-3.35 00:13:41.152 Starting 4 threads 00:13:42.630 00:13:42.631 job0: (groupid=0, jobs=1): err= 0: pid=201194: Thu May 16 20:12:29 2024 00:13:42.631 read: IOPS=4558, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1007msec) 00:13:42.631 slat (usec): min=2, max=21038, avg=116.28, stdev=888.61 00:13:42.631 clat (usec): min=2570, max=43534, avg=14788.74, stdev=6043.14 00:13:42.631 lat (usec): min=4669, max=43539, avg=14905.02, stdev=6101.65 00:13:42.631 clat percentiles (usec): 00:13:42.631 | 1.00th=[ 6915], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[10814], 00:13:42.631 | 30.00th=[11469], 40.00th=[11863], 50.00th=[13173], 60.00th=[13960], 00:13:42.631 | 70.00th=[16450], 80.00th=[18744], 90.00th=[20317], 95.00th=[26084], 00:13:42.631 | 99.00th=[36439], 99.50th=[39060], 99.90th=[43779], 99.95th=[43779], 00:13:42.631 | 99.99th=[43779] 00:13:42.631 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:13:42.631 slat (usec): min=4, max=21755, avg=91.35, stdev=675.44 00:13:42.631 clat (usec): min=2939, max=44766, avg=12969.83, stdev=4945.25 00:13:42.631 lat (usec): min=2946, max=44784, avg=13061.18, stdev=5024.25 00:13:42.631 clat percentiles (usec): 00:13:42.631 | 1.00th=[ 4555], 5.00th=[ 6587], 10.00th=[ 8455], 20.00th=[10421], 00:13:42.631 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:13:42.631 | 70.00th=[12125], 80.00th=[12780], 90.00th=[22414], 95.00th=[23200], 00:13:42.631 | 99.00th=[23987], 99.50th=[24249], 99.90th=[42206], 99.95th=[43779], 00:13:42.631 | 99.99th=[44827] 00:13:42.631 bw ( KiB/s): min=16368, max=20496, per=28.40%, avg=18432.00, stdev=2918.94, samples=2 00:13:42.631 iops : min= 4092, max= 5124, avg=4608.00, stdev=729.73, samples=2 00:13:42.631 lat (msec) : 4=0.38%, 10=15.00%, 20=69.59%, 50=15.03% 00:13:42.631 cpu : usr=4.77%, sys=11.13%, ctx=494, majf=0, minf=1 00:13:42.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:42.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:42.631 issued rwts: total=4590,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.631 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:42.631 job1: (groupid=0, jobs=1): err= 0: pid=201195: Thu May 16 20:12:29 2024 00:13:42.631 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:13:42.631 slat (usec): min=2, max=22940, avg=117.48, stdev=779.64 00:13:42.631 clat (usec): min=6414, max=51119, avg=15459.04, stdev=6066.24 00:13:42.631 lat (usec): min=6422, max=51154, avg=15576.52, stdev=6117.01 00:13:42.631 clat percentiles (usec): 00:13:42.631 | 1.00th=[ 7767], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[10945], 00:13:42.631 | 30.00th=[11731], 40.00th=[12649], 50.00th=[13698], 60.00th=[15401], 00:13:42.631 | 70.00th=[17171], 80.00th=[19268], 90.00th=[21103], 95.00th=[26608], 00:13:42.631 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:13:42.631 | 99.99th=[51119] 00:13:42.631 
write: IOPS=4405, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1003msec); 0 zone resets 00:13:42.631 slat (usec): min=3, max=10731, avg=111.66, stdev=772.46 00:13:42.631 clat (usec): min=2249, max=35886, avg=14419.11, stdev=4504.32 00:13:42.631 lat (usec): min=2969, max=35916, avg=14530.78, stdev=4571.80 00:13:42.631 clat percentiles (usec): 00:13:42.631 | 1.00th=[ 6194], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10552], 00:13:42.631 | 30.00th=[10945], 40.00th=[12125], 50.00th=[12780], 60.00th=[14353], 00:13:42.631 | 70.00th=[16319], 80.00th=[19530], 90.00th=[21103], 95.00th=[22414], 00:13:42.631 | 99.00th=[25297], 99.50th=[26870], 99.90th=[29754], 99.95th=[32375], 00:13:42.631 | 99.99th=[35914] 00:13:42.631 bw ( KiB/s): min=15624, max=18712, per=26.46%, avg=17168.00, stdev=2183.55, samples=2 00:13:42.631 iops : min= 3906, max= 4678, avg=4292.00, stdev=545.89, samples=2 00:13:42.631 lat (msec) : 4=0.20%, 10=5.94%, 20=77.96%, 50=15.89%, 100=0.01% 00:13:42.631 cpu : usr=2.10%, sys=5.69%, ctx=297, majf=0, minf=1 00:13:42.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:42.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:42.631 issued rwts: total=4096,4419,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.631 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:42.631 job2: (groupid=0, jobs=1): err= 0: pid=201196: Thu May 16 20:12:29 2024 00:13:42.631 read: IOPS=3565, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:13:42.631 slat (usec): min=3, max=22157, avg=136.94, stdev=981.91 00:13:42.631 clat (usec): min=1763, max=52100, avg=17467.20, stdev=7803.18 00:13:42.631 lat (usec): min=5369, max=52108, avg=17604.14, stdev=7874.89 00:13:42.631 clat percentiles (usec): 00:13:42.631 | 1.00th=[ 5735], 5.00th=[10814], 10.00th=[11994], 20.00th=[12780], 00:13:42.631 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13960], 60.00th=[15795], 00:13:42.631 | 70.00th=[18744], 80.00th=[20579], 90.00th=[28181], 95.00th=[34341], 00:13:42.631 | 99.00th=[46400], 99.50th=[49021], 99.90th=[52167], 99.95th=[52167], 00:13:42.631 | 99.99th=[52167] 00:13:42.631 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:13:42.631 slat (usec): min=4, max=19154, avg=132.13, stdev=914.61 00:13:42.631 clat (usec): min=2842, max=52088, avg=18057.91, stdev=6006.94 00:13:42.631 lat (usec): min=2847, max=52100, avg=18190.04, stdev=6072.33 00:13:42.631 clat percentiles (usec): 00:13:42.631 | 1.00th=[ 7373], 5.00th=[11469], 10.00th=[11994], 20.00th=[12518], 00:13:42.631 | 30.00th=[13173], 40.00th=[13960], 50.00th=[17695], 60.00th=[21627], 00:13:42.631 | 70.00th=[22152], 80.00th=[22676], 90.00th=[23462], 95.00th=[30278], 00:13:42.631 | 99.00th=[33817], 99.50th=[33817], 99.90th=[42730], 99.95th=[52167], 00:13:42.631 | 99.99th=[52167] 00:13:42.631 bw ( KiB/s): min=12288, max=16384, per=22.09%, avg=14336.00, stdev=2896.31, samples=2 00:13:42.631 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:13:42.631 lat (msec) : 2=0.01%, 4=0.20%, 10=3.39%, 20=61.75%, 50=34.44% 00:13:42.631 lat (msec) : 100=0.21% 00:13:42.631 cpu : usr=4.89%, sys=7.29%, ctx=307, majf=0, minf=1 00:13:42.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:13:42.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:42.631 issued rwts: 
total=3576,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.631 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:42.631 job3: (groupid=0, jobs=1): err= 0: pid=201199: Thu May 16 20:12:29 2024 00:13:42.631 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:13:42.631 slat (usec): min=2, max=13335, avg=135.53, stdev=826.38 00:13:42.631 clat (usec): min=5302, max=38050, avg=17090.99, stdev=4742.81 00:13:42.631 lat (usec): min=5307, max=38055, avg=17226.52, stdev=4785.32 00:13:42.631 clat percentiles (usec): 00:13:42.631 | 1.00th=[10159], 5.00th=[12518], 10.00th=[13566], 20.00th=[14091], 00:13:42.631 | 30.00th=[14484], 40.00th=[15008], 50.00th=[15270], 60.00th=[15795], 00:13:42.631 | 70.00th=[17433], 80.00th=[19268], 90.00th=[25297], 95.00th=[27657], 00:13:42.631 | 99.00th=[31065], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:13:42.631 | 99.99th=[38011] 00:13:42.631 write: IOPS=3718, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1002msec); 0 zone resets 00:13:42.631 slat (usec): min=3, max=12161, avg=131.06, stdev=719.17 00:13:42.631 clat (usec): min=484, max=53214, avg=17673.96, stdev=8405.47 00:13:42.631 lat (usec): min=2946, max=53221, avg=17805.02, stdev=8461.15 00:13:42.631 clat percentiles (usec): 00:13:42.631 | 1.00th=[ 4047], 5.00th=[ 9765], 10.00th=[12256], 20.00th=[13304], 00:13:42.631 | 30.00th=[13698], 40.00th=[14353], 50.00th=[14877], 60.00th=[15664], 00:13:42.631 | 70.00th=[16450], 80.00th=[20055], 90.00th=[29754], 95.00th=[38011], 00:13:42.631 | 99.00th=[47449], 99.50th=[48497], 99.90th=[53216], 99.95th=[53216], 00:13:42.631 | 99.99th=[53216] 00:13:42.631 bw ( KiB/s): min=13792, max=15048, per=22.22%, avg=14420.00, stdev=888.13, samples=2 00:13:42.631 iops : min= 3448, max= 3762, avg=3605.00, stdev=222.03, samples=2 00:13:42.631 lat (usec) : 500=0.01% 00:13:42.631 lat (msec) : 4=0.30%, 10=3.35%, 20=77.43%, 50=18.70%, 100=0.21% 00:13:42.631 cpu : usr=3.79%, sys=4.89%, ctx=372, majf=0, minf=1 00:13:42.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:13:42.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:42.631 issued rwts: total=3584,3726,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.631 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:42.631 00:13:42.631 Run status group 0 (all jobs): 00:13:42.631 READ: bw=61.5MiB/s (64.5MB/s), 13.9MiB/s-17.8MiB/s (14.6MB/s-18.7MB/s), io=61.9MiB (64.9MB), run=1002-1007msec 00:13:42.631 WRITE: bw=63.4MiB/s (66.5MB/s), 14.0MiB/s-17.9MiB/s (14.6MB/s-18.7MB/s), io=63.8MiB (66.9MB), run=1002-1007msec 00:13:42.631 00:13:42.631 Disk stats (read/write): 00:13:42.631 nvme0n1: ios=3633/3822, merge=0/0, ticks=53814/49601, in_queue=103415, util=89.58% 00:13:42.631 nvme0n2: ios=3447/3584, merge=0/0, ticks=23463/23227, in_queue=46690, util=92.58% 00:13:42.631 nvme0n3: ios=2666/3072, merge=0/0, ticks=34278/38529, in_queue=72807, util=96.45% 00:13:42.631 nvme0n4: ios=3048/3072, merge=0/0, ticks=33041/37510, in_queue=70551, util=93.78% 00:13:42.631 20:12:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:42.631 20:12:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=201352 00:13:42.631 20:12:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:42.631 20:12:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:42.631 [global] 
00:13:42.631 thread=1 00:13:42.631 invalidate=1 00:13:42.631 rw=read 00:13:42.631 time_based=1 00:13:42.631 runtime=10 00:13:42.631 ioengine=libaio 00:13:42.631 direct=1 00:13:42.631 bs=4096 00:13:42.631 iodepth=1 00:13:42.631 norandommap=1 00:13:42.631 numjobs=1 00:13:42.631 00:13:42.631 [job0] 00:13:42.631 filename=/dev/nvme0n1 00:13:42.631 [job1] 00:13:42.631 filename=/dev/nvme0n2 00:13:42.631 [job2] 00:13:42.631 filename=/dev/nvme0n3 00:13:42.631 [job3] 00:13:42.631 filename=/dev/nvme0n4 00:13:42.631 Could not set queue depth (nvme0n1) 00:13:42.631 Could not set queue depth (nvme0n2) 00:13:42.631 Could not set queue depth (nvme0n3) 00:13:42.631 Could not set queue depth (nvme0n4) 00:13:42.631 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:42.631 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:42.631 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:42.631 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:42.631 fio-3.35 00:13:42.631 Starting 4 threads 00:13:45.936 20:12:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:45.936 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=4730880, buflen=4096 00:13:45.936 fio: pid=201444, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:45.936 20:12:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:45.936 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=23556096, buflen=4096 00:13:45.936 fio: pid=201443, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:45.936 20:12:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:45.936 20:12:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:46.194 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=2678784, buflen=4096 00:13:46.194 fio: pid=201441, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:46.194 20:12:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:46.194 20:12:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:46.452 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=60641280, buflen=4096 00:13:46.452 fio: pid=201442, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:13:46.452 20:12:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:46.452 20:12:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:46.452 00:13:46.452 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=201441: Thu May 16 20:12:33 2024 00:13:46.453 read: IOPS=192, BW=769KiB/s (787kB/s)(2616KiB/3403msec) 00:13:46.453 slat (usec): min=4, max=5954, avg=18.15, stdev=236.75 
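The io_u error lines above, and the per-job error reports that follow, are the intended outcome of the hotplug phase of this test: fio is started against the four exported namespaces, and the backing bdevs are then deleted over RPC while the 10-second read workload is still running, so every job is expected to finish with a Remote I/O or Input/output error. A condensed sketch of that flow, reusing the script paths and bdev names visible in this trace (the SPDK shell variable is just shorthand for the workspace path, and the deletion ordering is simplified compared with the loops in target/fio.sh):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start the 10 s read workload against the connected namespaces, then let it ramp up.
$SPDK/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3

# Pull the storage out from under the running jobs.
$SPDK/scripts/rpc.py bdev_raid_delete concat0
$SPDK/scripts/rpc.py bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
  $SPDK/scripts/rpc.py bdev_malloc_delete "$m"
done

# fio exits non-zero once its files disappear, which is the pass condition here.
wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'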
00:13:46.453 clat (usec): min=176, max=42436, avg=5182.56, stdev=13318.50 00:13:46.453 lat (usec): min=181, max=43031, avg=5191.63, stdev=13326.26 00:13:46.453 clat percentiles (usec): 00:13:46.453 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 212], 00:13:46.453 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 235], 00:13:46.453 | 70.00th=[ 255], 80.00th=[ 293], 90.00th=[41157], 95.00th=[41157], 00:13:46.453 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:46.453 | 99.99th=[42206] 00:13:46.453 bw ( KiB/s): min= 96, max= 4336, per=3.30%, avg=808.00, stdev=1728.39, samples=6 00:13:46.453 iops : min= 24, max= 1084, avg=202.00, stdev=432.10, samples=6 00:13:46.453 lat (usec) : 250=67.79%, 500=19.69%, 750=0.15% 00:13:46.453 lat (msec) : 20=0.15%, 50=12.06% 00:13:46.453 cpu : usr=0.03%, sys=0.15%, ctx=658, majf=0, minf=1 00:13:46.453 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:46.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.453 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.453 issued rwts: total=655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.453 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:46.453 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=201442: Thu May 16 20:12:33 2024 00:13:46.453 read: IOPS=4046, BW=15.8MiB/s (16.6MB/s)(57.8MiB/3659msec) 00:13:46.453 slat (usec): min=4, max=12902, avg= 9.55, stdev=184.19 00:13:46.453 clat (usec): min=155, max=41086, avg=236.01, stdev=1003.93 00:13:46.453 lat (usec): min=163, max=53984, avg=245.57, stdev=1073.01 00:13:46.453 clat percentiles (usec): 00:13:46.453 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 186], 00:13:46.453 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:13:46.453 | 70.00th=[ 215], 80.00th=[ 229], 90.00th=[ 273], 95.00th=[ 285], 00:13:46.453 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 515], 99.95th=[40633], 00:13:46.453 | 99.99th=[41157] 00:13:46.453 bw ( KiB/s): min=12552, max=19136, per=67.81%, avg=16580.57, stdev=2370.60, samples=7 00:13:46.453 iops : min= 3138, max= 4784, avg=4145.14, stdev=592.65, samples=7 00:13:46.453 lat (usec) : 250=84.73%, 500=15.16%, 750=0.03%, 1000=0.01% 00:13:46.453 lat (msec) : 2=0.01%, 50=0.06% 00:13:46.453 cpu : usr=1.67%, sys=3.85%, ctx=14814, majf=0, minf=1 00:13:46.453 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:46.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.453 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.453 issued rwts: total=14806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.453 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:46.453 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=201443: Thu May 16 20:12:33 2024 00:13:46.453 read: IOPS=1844, BW=7375KiB/s (7552kB/s)(22.5MiB/3119msec) 00:13:46.453 slat (usec): min=4, max=825, avg= 7.08, stdev=11.18 00:13:46.453 clat (usec): min=153, max=42098, avg=533.59, stdev=3455.83 00:13:46.453 lat (usec): min=179, max=42923, avg=540.67, stdev=3458.09 00:13:46.453 clat percentiles (usec): 00:13:46.453 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 200], 00:13:46.453 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 231], 00:13:46.453 | 70.00th=[ 245], 80.00th=[ 273], 90.00th=[ 310], 95.00th=[ 
379], 00:13:46.453 | 99.00th=[ 420], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:13:46.453 | 99.99th=[42206] 00:13:46.453 bw ( KiB/s): min= 96, max=17008, per=30.58%, avg=7477.33, stdev=7323.60, samples=6 00:13:46.453 iops : min= 24, max= 4252, avg=1869.33, stdev=1830.90, samples=6 00:13:46.453 lat (usec) : 250=72.60%, 500=26.62%, 750=0.03% 00:13:46.453 lat (msec) : 20=0.02%, 50=0.71% 00:13:46.453 cpu : usr=0.74%, sys=1.41%, ctx=5754, majf=0, minf=1 00:13:46.453 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:46.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.453 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.453 issued rwts: total=5752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.453 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:46.453 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=201444: Thu May 16 20:12:33 2024 00:13:46.453 read: IOPS=402, BW=1610KiB/s (1649kB/s)(4620KiB/2869msec) 00:13:46.453 slat (nsec): min=5707, max=47551, avg=7005.21, stdev=2444.25 00:13:46.453 clat (usec): min=201, max=42055, avg=2474.74, stdev=9239.43 00:13:46.453 lat (usec): min=207, max=42067, avg=2481.74, stdev=9240.81 00:13:46.453 clat percentiles (usec): 00:13:46.453 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 237], 20.00th=[ 253], 00:13:46.453 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:13:46.453 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[41157], 00:13:46.453 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:46.453 | 99.99th=[42206] 00:13:46.453 bw ( KiB/s): min= 96, max= 8776, per=7.50%, avg=1833.60, stdev=3880.92, samples=5 00:13:46.453 iops : min= 24, max= 2194, avg=458.40, stdev=970.23, samples=5 00:13:46.453 lat (usec) : 250=17.56%, 500=76.47%, 750=0.17% 00:13:46.453 lat (msec) : 2=0.17%, 4=0.09%, 10=0.09%, 50=5.36% 00:13:46.453 cpu : usr=0.07%, sys=0.49%, ctx=1157, majf=0, minf=1 00:13:46.453 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:46.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.453 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.453 issued rwts: total=1156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.453 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:46.453 00:13:46.453 Run status group 0 (all jobs): 00:13:46.453 READ: bw=23.9MiB/s (25.0MB/s), 769KiB/s-15.8MiB/s (787kB/s-16.6MB/s), io=87.4MiB (91.6MB), run=2869-3659msec 00:13:46.453 00:13:46.453 Disk stats (read/write): 00:13:46.453 nvme0n1: ios=657/0, merge=0/0, ticks=4366/0, in_queue=4366, util=99.46% 00:13:46.453 nvme0n2: ios=14845/0, merge=0/0, ticks=4403/0, in_queue=4403, util=98.39% 00:13:46.453 nvme0n3: ios=5661/0, merge=0/0, ticks=3009/0, in_queue=3009, util=96.72% 00:13:46.453 nvme0n4: ios=1199/0, merge=0/0, ticks=3775/0, in_queue=3775, util=99.19% 00:13:46.710 20:12:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:46.710 20:12:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:46.968 20:12:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:46.968 20:12:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:47.226 20:12:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:47.226 20:12:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:47.484 20:12:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:47.484 20:12:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:47.742 20:12:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:47.742 20:12:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 201352 00:13:47.742 20:12:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:47.742 20:12:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:47.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.742 20:12:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:47.742 20:12:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:13:47.742 20:12:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:47.742 20:12:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.742 20:12:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:47.742 20:12:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.742 20:12:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:13:47.742 20:12:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:47.742 20:12:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:47.742 nvmf hotplug test: fio failed as expected 00:13:47.742 20:12:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:48.000 rmmod nvme_tcp 00:13:48.000 rmmod nvme_fabrics 00:13:48.000 rmmod nvme_keyring 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 199342 ']' 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 199342 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 199342 ']' 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 199342 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 199342 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 199342' 00:13:48.000 killing process with pid 199342 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 199342 00:13:48.000 [2024-05-16 20:12:35.105391] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:48.000 20:12:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 199342 00:13:48.258 20:12:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:48.258 20:12:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:48.258 20:12:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:48.258 20:12:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.258 20:12:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:48.258 20:12:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.258 20:12:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.258 20:12:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.789 20:12:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:50.789 00:13:50.789 real 0m23.228s 00:13:50.789 user 1m20.124s 00:13:50.789 sys 0m7.105s 00:13:50.789 20:12:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:50.789 20:12:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.789 ************************************ 00:13:50.789 END TEST nvmf_fio_target 00:13:50.789 ************************************ 00:13:50.789 20:12:37 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:50.789 20:12:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:50.789 20:12:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:50.789 20:12:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:50.789 ************************************ 00:13:50.789 
START TEST nvmf_bdevio 00:13:50.789 ************************************ 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:50.789 * Looking for test storage... 00:13:50.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.789 20:12:37 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:13:50.790 20:12:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:52.165 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.165 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:13:52.165 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:52.165 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:52.165 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:52.165 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:52.165 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:52.165 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:13:52.165 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:52.165 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:13:52.165 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:13:52.165 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:13:52.165 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:13:52.165 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:13:52.165 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:13:52.165 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:52.166 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:52.166 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:52.166 Found net devices under 0000:09:00.0: cvl_0_0 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:52.166 
Found net devices under 0000:09:00.1: cvl_0_1 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:52.166 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:52.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:13:52.425 00:13:52.425 --- 10.0.0.2 ping statistics --- 00:13:52.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.425 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:52.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:52.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:13:52.425 00:13:52.425 --- 10.0.0.1 ping statistics --- 00:13:52.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.425 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=204056 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 204056 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 204056 ']' 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:52.425 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:52.425 [2024-05-16 20:12:39.402281] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:13:52.425 [2024-05-16 20:12:39.402366] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.425 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.425 [2024-05-16 20:12:39.467201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:52.684 [2024-05-16 20:12:39.574986] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.684 [2024-05-16 20:12:39.575052] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
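The nvmf_tcp_init trace above boils down to a short setup: the detected port cvl_0_0 is moved into its own network namespace to play the target, cvl_0_1 stays in the root namespace as the initiator, and the target application is then launched inside that namespace. Condensed from the commands in this trace, with the interface names and addresses exactly as detected on this machine:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Connectivity check in both directions, then load the host driver and start the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &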
00:13:52.684 [2024-05-16 20:12:39.575082] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.684 [2024-05-16 20:12:39.575094] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.684 [2024-05-16 20:12:39.575105] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.684 [2024-05-16 20:12:39.575187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:52.684 [2024-05-16 20:12:39.575246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:52.684 [2024-05-16 20:12:39.575305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:52.684 [2024-05-16 20:12:39.575309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:52.684 [2024-05-16 20:12:39.717400] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:52.684 Malloc0 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
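The rpc_cmd calls traced just above are the entire target-side configuration for this bdevio run. Written out as direct rpc.py invocations against the target's default /var/tmp/spdk.sock (rpc_cmd in the harness is a thin wrapper around exactly these calls; the RPC variable below is shorthand):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB IO unit size
$RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420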
00:13:52.684 [2024-05-16 20:12:39.767623] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:52.684 [2024-05-16 20:12:39.767957] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:52.684 { 00:13:52.684 "params": { 00:13:52.684 "name": "Nvme$subsystem", 00:13:52.684 "trtype": "$TEST_TRANSPORT", 00:13:52.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:52.684 "adrfam": "ipv4", 00:13:52.684 "trsvcid": "$NVMF_PORT", 00:13:52.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:52.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:52.684 "hdgst": ${hdgst:-false}, 00:13:52.684 "ddgst": ${ddgst:-false} 00:13:52.684 }, 00:13:52.684 "method": "bdev_nvme_attach_controller" 00:13:52.684 } 00:13:52.684 EOF 00:13:52.684 )") 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:13:52.684 20:12:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:52.684 "params": { 00:13:52.684 "name": "Nvme1", 00:13:52.685 "trtype": "tcp", 00:13:52.685 "traddr": "10.0.0.2", 00:13:52.685 "adrfam": "ipv4", 00:13:52.685 "trsvcid": "4420", 00:13:52.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:52.685 "hdgst": false, 00:13:52.685 "ddgst": false 00:13:52.685 }, 00:13:52.685 "method": "bdev_nvme_attach_controller" 00:13:52.685 }' 00:13:52.685 [2024-05-16 20:12:39.813268] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
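The JSON that gen_nvmf_target_json pipes into bdevio above contains one interesting entry, the bdev_nvme_attach_controller call printed in the trace; bdevio then runs its CUnit suite against the resulting Nvme1n1 bdev. A stand-alone equivalent is sketched below: the attach parameters are copied from the trace, while the outer "subsystems"/"bdev" wrapper and the temporary file name are assumptions, since the harness builds the document in memory and feeds it through /dev/fd/62.

cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json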
00:13:52.685 [2024-05-16 20:12:39.813352] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid204105 ] 00:13:52.943 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.943 [2024-05-16 20:12:39.879565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:52.943 [2024-05-16 20:12:39.991734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.943 [2024-05-16 20:12:39.993874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.943 [2024-05-16 20:12:39.993886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.201 I/O targets: 00:13:53.201 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:53.201 00:13:53.201 00:13:53.201 CUnit - A unit testing framework for C - Version 2.1-3 00:13:53.201 http://cunit.sourceforge.net/ 00:13:53.201 00:13:53.201 00:13:53.201 Suite: bdevio tests on: Nvme1n1 00:13:53.201 Test: blockdev write read block ...passed 00:13:53.201 Test: blockdev write zeroes read block ...passed 00:13:53.201 Test: blockdev write zeroes read no split ...passed 00:13:53.458 Test: blockdev write zeroes read split ...passed 00:13:53.458 Test: blockdev write zeroes read split partial ...passed 00:13:53.458 Test: blockdev reset ...[2024-05-16 20:12:40.404702] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:53.458 [2024-05-16 20:12:40.404808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd2b00 (9): Bad file descriptor 00:13:53.458 [2024-05-16 20:12:40.419827] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:53.458 passed 00:13:53.458 Test: blockdev write read 8 blocks ...passed 00:13:53.458 Test: blockdev write read size > 128k ...passed 00:13:53.458 Test: blockdev write read invalid size ...passed 00:13:53.458 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:53.458 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:53.458 Test: blockdev write read max offset ...passed 00:13:53.458 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:53.458 Test: blockdev writev readv 8 blocks ...passed 00:13:53.458 Test: blockdev writev readv 30 x 1block ...passed 00:13:53.716 Test: blockdev writev readv block ...passed 00:13:53.716 Test: blockdev writev readv size > 128k ...passed 00:13:53.716 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:53.716 Test: blockdev comparev and writev ...[2024-05-16 20:12:40.633795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.716 [2024-05-16 20:12:40.633832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:53.716 [2024-05-16 20:12:40.633886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.716 [2024-05-16 20:12:40.633915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:53.716 [2024-05-16 20:12:40.634272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.716 [2024-05-16 20:12:40.634299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:53.717 [2024-05-16 20:12:40.634334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.717 [2024-05-16 20:12:40.634361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:53.717 [2024-05-16 20:12:40.634717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.717 [2024-05-16 20:12:40.634744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:53.717 [2024-05-16 20:12:40.634778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.717 [2024-05-16 20:12:40.634805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:53.717 [2024-05-16 20:12:40.635170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.717 [2024-05-16 20:12:40.635206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:53.717 [2024-05-16 20:12:40.635241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.717 [2024-05-16 20:12:40.635268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:53.717 passed 00:13:53.717 Test: blockdev nvme passthru rw ...passed 00:13:53.717 Test: blockdev nvme passthru vendor specific ...[2024-05-16 20:12:40.718113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:53.717 [2024-05-16 20:12:40.718149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:53.717 [2024-05-16 20:12:40.718306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:53.717 [2024-05-16 20:12:40.718333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:53.717 [2024-05-16 20:12:40.718496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:53.717 [2024-05-16 20:12:40.718522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:53.717 [2024-05-16 20:12:40.718676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:53.717 [2024-05-16 20:12:40.718701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:53.717 passed 00:13:53.717 Test: blockdev nvme admin passthru ...passed 00:13:53.717 Test: blockdev copy ...passed 00:13:53.717 00:13:53.717 Run Summary: Type Total Ran Passed Failed Inactive 00:13:53.717 suites 1 1 n/a 0 0 00:13:53.717 tests 23 23 23 0 0 00:13:53.717 asserts 152 152 152 0 n/a 00:13:53.717 00:13:53.717 Elapsed time = 1.136 seconds 00:13:53.974 20:12:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:53.974 20:12:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.974 20:12:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:53.974 20:12:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.974 20:12:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:53.974 20:12:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:53.974 20:12:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:53.974 20:12:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:13:53.974 20:12:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:53.974 20:12:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:13:53.974 20:12:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:53.974 20:12:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:53.974 rmmod nvme_tcp 00:13:53.974 rmmod nvme_fabrics 00:13:53.974 rmmod nvme_keyring 00:13:53.974 20:12:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:53.974 20:12:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:13:53.974 20:12:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:13:53.975 20:12:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 204056 ']' 00:13:53.975 20:12:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 204056 00:13:53.975 20:12:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
204056 ']' 00:13:53.975 20:12:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 204056 00:13:53.975 20:12:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:13:53.975 20:12:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:53.975 20:12:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 204056 00:13:53.975 20:12:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:13:53.975 20:12:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:13:53.975 20:12:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 204056' 00:13:53.975 killing process with pid 204056 00:13:53.975 20:12:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 204056 00:13:53.975 [2024-05-16 20:12:41.070417] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:53.975 20:12:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 204056 00:13:54.232 20:12:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:54.233 20:12:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:54.233 20:12:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:54.233 20:12:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:54.233 20:12:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:54.233 20:12:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.233 20:12:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.233 20:12:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.761 20:12:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:56.761 00:13:56.761 real 0m5.940s 00:13:56.761 user 0m9.722s 00:13:56.761 sys 0m1.790s 00:13:56.761 20:12:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:56.761 20:12:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:56.761 ************************************ 00:13:56.761 END TEST nvmf_bdevio 00:13:56.761 ************************************ 00:13:56.761 20:12:43 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:56.761 20:12:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:56.761 20:12:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:56.761 20:12:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:56.761 ************************************ 00:13:56.761 START TEST nvmf_auth_target 00:13:56.761 ************************************ 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:56.761 * Looking for test storage... 
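The nvmf_bdevio run above ends with the usual nvmftestfini teardown: the nvme-tcp/nvme-fabrics modules are unloaded (rmmod also reports nvme_keyring), the nvmf_tgt process (pid 204056 in this run) is killed, and the initiator-side address is flushed. A rough, non-authoritative recap of that sequence — pid and interface names are placeholders taken from this log, and the namespace removal (the _remove_spdk_ns helper, whose body is not shown here) is only approximated by a plain ip netns delete:

#!/usr/bin/env bash
# Approximate replay of the nvmftestfini steps logged above; values are from this run only.
NVMF_PID=204056                                      # target pid reported by killprocess above
modprobe -v -r nvme-tcp                              # unloads nvme_tcp (and its nvme_fabrics/nvme_keyring users)
modprobe -v -r nvme-fabrics                          # harmless if already gone
kill "$NVMF_PID" 2>/dev/null || true                 # stop the nvmf_tgt reactors
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true  # assumption: roughly what _remove_spdk_ns does
ip -4 addr flush cvl_0_1                             # clear the initiator-side test address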
00:13:56.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:56.761 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:56.762 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:56.762 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:56.762 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:56.762 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:56.762 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:56.762 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:13:56.762 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:56.762 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.762 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:56.762 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:56.762 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:56.762 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.762 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.762 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.762 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:56.762 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:56.762 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:56.762 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:58.662 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:58.662 20:12:45 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:58.663 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:58.663 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: 
cvl_0_0' 00:13:58.663 Found net devices under 0000:09:00.0: cvl_0_0 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:58.663 Found net devices under 0000:09:00.1: cvl_0_1 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:58.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:58.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:13:58.663 00:13:58.663 --- 10.0.0.2 ping statistics --- 00:13:58.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.663 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:58.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:58.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:13:58.663 00:13:58.663 --- 10.0.0.1 ping statistics --- 00:13:58.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.663 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=206280 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 206280 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 206280 ']' 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
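The two ping exchanges above confirm the loopback topology that nvmf_tcp_init builds before the target is started: the first E810 port (cvl_0_0) is moved into a private namespace and addressed as 10.0.0.2, while the second port (cvl_0_1) stays in the host namespace as the initiator at 10.0.0.1, with TCP/4420 opened on the initiator side. Condensed into one runnable sketch (interface and namespace names are the ones from this log; nvmf_tgt is then launched inside the namespace exactly as shown above):

#!/usr/bin/env bash
# Target/initiator loopback across a network namespace, as set up by nvmf_tcp_init above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side E810 port
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address (host namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address (test namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                 # host namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> host namespace

Keeping the target's port in its own namespace lets a single host exercise a real NIC-to-NIC TCP path without needing a second machine.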
00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:58.663 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=206299 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f43425519f96f12f2feb6cf7962fa1b0f5bec38464513f13 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gvz 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f43425519f96f12f2feb6cf7962fa1b0f5bec38464513f13 0 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f43425519f96f12f2feb6cf7962fa1b0f5bec38464513f13 0 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f43425519f96f12f2feb6cf7962fa1b0f5bec38464513f13 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gvz 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gvz 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.gvz 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=40a18893ee555d2999d891e8e2b7a0d873aa4373b143e5616307fe9a12490624 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.O0y 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 40a18893ee555d2999d891e8e2b7a0d873aa4373b143e5616307fe9a12490624 3 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 40a18893ee555d2999d891e8e2b7a0d873aa4373b143e5616307fe9a12490624 3 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=40a18893ee555d2999d891e8e2b7a0d873aa4373b143e5616307fe9a12490624 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.O0y 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.O0y 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.O0y 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5e0d6fc9250d6dbe67cfc0208574b99f 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.93h 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5e0d6fc9250d6dbe67cfc0208574b99f 1 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5e0d6fc9250d6dbe67cfc0208574b99f 1 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=5e0d6fc9250d6dbe67cfc0208574b99f 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:58.922 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.93h 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.93h 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.93h 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bea68d55c4f5be0d78e95923eb0315d583eb2de52df98dde 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.eFX 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bea68d55c4f5be0d78e95923eb0315d583eb2de52df98dde 2 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bea68d55c4f5be0d78e95923eb0315d583eb2de52df98dde 2 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bea68d55c4f5be0d78e95923eb0315d583eb2de52df98dde 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.eFX 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.eFX 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.eFX 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:58.922 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:59.180 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:59.180 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=40afb1fcffe22cdbdb8cd391e226c4db6471303977f0ce07 00:13:59.180 
20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:59.180 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Vfw 00:13:59.180 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 40afb1fcffe22cdbdb8cd391e226c4db6471303977f0ce07 2 00:13:59.180 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 40afb1fcffe22cdbdb8cd391e226c4db6471303977f0ce07 2 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=40afb1fcffe22cdbdb8cd391e226c4db6471303977f0ce07 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Vfw 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Vfw 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Vfw 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c8e0d3f16851e29054239122ad4a1198 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8j3 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c8e0d3f16851e29054239122ad4a1198 1 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c8e0d3f16851e29054239122ad4a1198 1 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c8e0d3f16851e29054239122ad4a1198 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8j3 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8j3 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.8j3 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d9c1ca262c14a615cdeb0a314635f350e63118825828883ad810ac0ec789453b 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.zfg 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d9c1ca262c14a615cdeb0a314635f350e63118825828883ad810ac0ec789453b 3 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d9c1ca262c14a615cdeb0a314635f350e63118825828883ad810ac0ec789453b 3 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d9c1ca262c14a615cdeb0a314635f350e63118825828883ad810ac0ec789453b 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.zfg 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.zfg 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.zfg 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 206280 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 206280 ']' 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
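The key/ctrlr-key pairs generated above all come out of the same gen_dhchap_key helper: random hex read via xxd from /dev/urandom is wrapped into an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<digest id>:<base64 payload>: and written to a mode-0600 temp file, which is what later gets registered with keyring_file_add_key and passed to nvme connect as --dhchap-secret. A minimal stand-alone sketch, assuming the payload is the key characters followed by a little-endian CRC-32 (the python one-liner the harness actually pipes the key through is not shown in this log):

#!/usr/bin/env bash
# Sketch of gen_dhchap_key: emit a DHHC-1 secret file like /tmp/spdk.key-null.gvz above.
digest_id=0     # 0=null, 1=sha256, 2=sha384, 3=sha512 -- matches the DHHC-1:00/01/02/03 strings in this log
len=48          # number of hex characters of key material
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
file=$(mktemp -t spdk.key-null.XXX)
secret=$(python3 - "$key" "$digest_id" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                    # the hex string itself is the secret material
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed little-endian CRC-32 appended to the payload
print("DHHC-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode()))
PYEOF
)
printf '%s\n' "$secret" > "$file"
chmod 0600 "$file"
echo "$file"

A controller key (ckey) is produced the same way with a different digest id; each key/ckey pair is what the bdev_nvme_attach_controller and nvme connect calls later in this log consume as --dhchap-key/--dhchap-ctrlr-key and --dhchap-secret/--dhchap-ctrl-secret.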
00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:59.181 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.440 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:59.440 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:13:59.440 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 206299 /var/tmp/host.sock 00:13:59.440 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 206299 ']' 00:13:59.440 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:13:59.440 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:59.440 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:59.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:59.440 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:59.440 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.698 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:59.698 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:13:59.698 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:59.698 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.698 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.698 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.698 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:59.698 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gvz 00:13:59.698 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.698 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.698 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.699 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.gvz 00:13:59.699 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.gvz 00:13:59.956 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.O0y ]] 00:13:59.956 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O0y 00:13:59.956 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.956 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.956 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.956 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O0y 00:13:59.956 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O0y 00:14:00.213 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:00.213 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.93h 00:14:00.213 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.213 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.213 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.213 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.93h 00:14:00.213 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.93h 00:14:00.470 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.eFX ]] 00:14:00.470 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eFX 00:14:00.470 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.470 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.470 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.470 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eFX 00:14:00.470 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eFX 00:14:00.728 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:00.728 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Vfw 00:14:00.728 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.728 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.728 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.728 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Vfw 00:14:00.728 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Vfw 00:14:00.985 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.8j3 ]] 00:14:00.985 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8j3 00:14:00.985 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.985 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.985 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.985 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8j3 00:14:00.985 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.8j3 00:14:01.242 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:01.242 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.zfg 00:14:01.242 20:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.242 20:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.242 20:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.242 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.zfg 00:14:01.242 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.zfg 00:14:01.500 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:01.500 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:01.500 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:01.500 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:01.500 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:01.500 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:01.757 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:01.757 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.757 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:01.757 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:01.757 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:01.757 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.757 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.757 20:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.757 20:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.757 20:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.757 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.757 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.015 00:14:02.015 20:12:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:02.015 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.015 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:02.273 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.273 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.273 20:12:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.273 20:12:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.273 20:12:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.273 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:02.273 { 00:14:02.273 "cntlid": 1, 00:14:02.273 "qid": 0, 00:14:02.273 "state": "enabled", 00:14:02.273 "listen_address": { 00:14:02.273 "trtype": "TCP", 00:14:02.273 "adrfam": "IPv4", 00:14:02.273 "traddr": "10.0.0.2", 00:14:02.273 "trsvcid": "4420" 00:14:02.273 }, 00:14:02.273 "peer_address": { 00:14:02.273 "trtype": "TCP", 00:14:02.273 "adrfam": "IPv4", 00:14:02.273 "traddr": "10.0.0.1", 00:14:02.273 "trsvcid": "52522" 00:14:02.273 }, 00:14:02.273 "auth": { 00:14:02.273 "state": "completed", 00:14:02.273 "digest": "sha256", 00:14:02.273 "dhgroup": "null" 00:14:02.273 } 00:14:02.273 } 00:14:02.273 ]' 00:14:02.273 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:02.273 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:02.273 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:02.273 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:02.273 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:02.531 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.531 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.531 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.788 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjQzNDI1NTE5Zjk2ZjEyZjJmZWI2Y2Y3OTYyZmExYjBmNWJlYzM4NDY0NTEzZjEzu0hpxg==: --dhchap-ctrl-secret DHHC-1:03:NDBhMTg4OTNlZTU1NWQyOTk5ZDg5MWU4ZTJiN2EwZDg3M2FhNDM3M2IxNDNlNTYxNjMwN2ZlOWExMjQ5MDYyNC750lM=: 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.046 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:08.046 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.046 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.046 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.046 20:12:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.046 20:12:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.046 20:12:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.046 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:08.046 { 00:14:08.046 "cntlid": 3, 00:14:08.046 "qid": 0, 00:14:08.046 "state": "enabled", 00:14:08.046 "listen_address": { 00:14:08.046 
"trtype": "TCP", 00:14:08.046 "adrfam": "IPv4", 00:14:08.046 "traddr": "10.0.0.2", 00:14:08.046 "trsvcid": "4420" 00:14:08.046 }, 00:14:08.046 "peer_address": { 00:14:08.046 "trtype": "TCP", 00:14:08.046 "adrfam": "IPv4", 00:14:08.046 "traddr": "10.0.0.1", 00:14:08.046 "trsvcid": "41134" 00:14:08.046 }, 00:14:08.047 "auth": { 00:14:08.047 "state": "completed", 00:14:08.047 "digest": "sha256", 00:14:08.047 "dhgroup": "null" 00:14:08.047 } 00:14:08.047 } 00:14:08.047 ]' 00:14:08.047 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:08.047 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:08.047 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:08.047 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:08.047 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:08.304 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.304 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.304 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.562 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NWUwZDZmYzkyNTBkNmRiZTY3Y2ZjMDIwODU3NGI5OWZTxq7R: --dhchap-ctrl-secret DHHC-1:02:YmVhNjhkNTVjNGY1YmUwZDc4ZTk1OTIzZWIwMzE1ZDU4M2ViMmRlNTJkZjk4ZGRl5NAoWA==: 00:14:09.494 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.494 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:09.494 20:12:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.494 20:12:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.494 20:12:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.494 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:09.494 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:09.494 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:09.784 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:09.784 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:09.784 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:09.784 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:09.784 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:09.784 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.784 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.784 20:12:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.784 20:12:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.784 20:12:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.784 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.784 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.041 00:14:10.041 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:10.041 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:10.041 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.299 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.299 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.299 20:12:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.299 20:12:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.299 20:12:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.299 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:10.299 { 00:14:10.299 "cntlid": 5, 00:14:10.299 "qid": 0, 00:14:10.299 "state": "enabled", 00:14:10.299 "listen_address": { 00:14:10.299 "trtype": "TCP", 00:14:10.299 "adrfam": "IPv4", 00:14:10.299 "traddr": "10.0.0.2", 00:14:10.299 "trsvcid": "4420" 00:14:10.299 }, 00:14:10.299 "peer_address": { 00:14:10.299 "trtype": "TCP", 00:14:10.299 "adrfam": "IPv4", 00:14:10.299 "traddr": "10.0.0.1", 00:14:10.299 "trsvcid": "41174" 00:14:10.299 }, 00:14:10.299 "auth": { 00:14:10.299 "state": "completed", 00:14:10.299 "digest": "sha256", 00:14:10.299 "dhgroup": "null" 00:14:10.299 } 00:14:10.299 } 00:14:10.299 ]' 00:14:10.299 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:10.299 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:10.299 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:10.299 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:10.299 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:10.299 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.299 20:12:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.299 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.556 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDBhZmIxZmNmZmUyMmNkYmRiOGNkMzkxZTIyNmM0ZGI2NDcxMzAzOTc3ZjBjZTA3+ZhsHw==: --dhchap-ctrl-secret DHHC-1:01:YzhlMGQzZjE2ODUxZTI5MDU0MjM5MTIyYWQ0YTExOThWm3ye: 00:14:11.489 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.489 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:11.489 20:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.489 20:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.489 20:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.489 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:11.489 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:11.489 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:11.746 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:11.746 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:11.746 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:11.746 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:11.746 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:11.746 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.746 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:11.746 20:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.746 20:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.746 20:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.746 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:11.746 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:12.004 00:14:12.004 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:12.004 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:12.004 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.262 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.262 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.262 20:12:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.262 20:12:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.262 20:12:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.262 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:12.262 { 00:14:12.262 "cntlid": 7, 00:14:12.262 "qid": 0, 00:14:12.262 "state": "enabled", 00:14:12.262 "listen_address": { 00:14:12.262 "trtype": "TCP", 00:14:12.262 "adrfam": "IPv4", 00:14:12.262 "traddr": "10.0.0.2", 00:14:12.262 "trsvcid": "4420" 00:14:12.262 }, 00:14:12.262 "peer_address": { 00:14:12.262 "trtype": "TCP", 00:14:12.262 "adrfam": "IPv4", 00:14:12.262 "traddr": "10.0.0.1", 00:14:12.262 "trsvcid": "41190" 00:14:12.262 }, 00:14:12.262 "auth": { 00:14:12.262 "state": "completed", 00:14:12.262 "digest": "sha256", 00:14:12.262 "dhgroup": "null" 00:14:12.262 } 00:14:12.262 } 00:14:12.262 ]' 00:14:12.262 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:12.262 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:12.262 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:12.519 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:12.519 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:12.519 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.519 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.519 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.776 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDljMWNhMjYyYzE0YTYxNWNkZWIwYTMxNDYzNWYzNTBlNjMxMTg4MjU4Mjg4ODNhZDgxMGFjMGVjNzg5NDUzYn29EOQ=: 00:14:13.708 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.708 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:13.708 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.708 
20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.708 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.708 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:13.708 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:13.708 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:13.708 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:13.966 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:13.966 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:13.966 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:13.966 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:13.966 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:13.966 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.966 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.966 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.966 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.966 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.966 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.966 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.223 00:14:14.223 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:14.223 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.223 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:14.481 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.481 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.481 20:13:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.481 20:13:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.481 20:13:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.481 20:13:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:14.481 { 00:14:14.481 "cntlid": 9, 00:14:14.481 "qid": 0, 00:14:14.481 "state": "enabled", 00:14:14.481 "listen_address": { 00:14:14.481 "trtype": "TCP", 00:14:14.481 "adrfam": "IPv4", 00:14:14.481 "traddr": "10.0.0.2", 00:14:14.481 "trsvcid": "4420" 00:14:14.481 }, 00:14:14.481 "peer_address": { 00:14:14.481 "trtype": "TCP", 00:14:14.481 "adrfam": "IPv4", 00:14:14.481 "traddr": "10.0.0.1", 00:14:14.481 "trsvcid": "41226" 00:14:14.481 }, 00:14:14.481 "auth": { 00:14:14.481 "state": "completed", 00:14:14.481 "digest": "sha256", 00:14:14.481 "dhgroup": "ffdhe2048" 00:14:14.481 } 00:14:14.481 } 00:14:14.481 ]' 00:14:14.481 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:14.481 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:14.481 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:14.481 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:14.481 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:14.481 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.481 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.481 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.739 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjQzNDI1NTE5Zjk2ZjEyZjJmZWI2Y2Y3OTYyZmExYjBmNWJlYzM4NDY0NTEzZjEzu0hpxg==: --dhchap-ctrl-secret DHHC-1:03:NDBhMTg4OTNlZTU1NWQyOTk5ZDg5MWU4ZTJiN2EwZDg3M2FhNDM3M2IxNDNlNTYxNjMwN2ZlOWExMjQ5MDYyNC750lM=: 00:14:15.671 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.672 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:15.672 20:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.672 20:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.672 20:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.672 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:15.672 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:15.672 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:15.929 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:15.929 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:15.929 20:13:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:15.929 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:15.929 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:15.929 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.929 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.929 20:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.929 20:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.929 20:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.929 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.929 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.186 00:14:16.186 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:16.186 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:16.187 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.444 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.444 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.444 20:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.444 20:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.444 20:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.444 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:16.444 { 00:14:16.444 "cntlid": 11, 00:14:16.444 "qid": 0, 00:14:16.444 "state": "enabled", 00:14:16.444 "listen_address": { 00:14:16.444 "trtype": "TCP", 00:14:16.444 "adrfam": "IPv4", 00:14:16.444 "traddr": "10.0.0.2", 00:14:16.444 "trsvcid": "4420" 00:14:16.444 }, 00:14:16.444 "peer_address": { 00:14:16.444 "trtype": "TCP", 00:14:16.444 "adrfam": "IPv4", 00:14:16.444 "traddr": "10.0.0.1", 00:14:16.444 "trsvcid": "41254" 00:14:16.444 }, 00:14:16.444 "auth": { 00:14:16.444 "state": "completed", 00:14:16.444 "digest": "sha256", 00:14:16.444 "dhgroup": "ffdhe2048" 00:14:16.444 } 00:14:16.444 } 00:14:16.444 ]' 00:14:16.444 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:16.444 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:16.444 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:16.444 20:13:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:16.444 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:16.700 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.700 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.700 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.956 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NWUwZDZmYzkyNTBkNmRiZTY3Y2ZjMDIwODU3NGI5OWZTxq7R: --dhchap-ctrl-secret DHHC-1:02:YmVhNjhkNTVjNGY1YmUwZDc4ZTk1OTIzZWIwMzE1ZDU4M2ViMmRlNTJkZjk4ZGRl5NAoWA==: 00:14:17.890 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.890 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:17.890 20:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.890 20:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.890 20:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.890 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:17.890 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:17.890 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:18.147 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:18.147 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:18.147 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:18.147 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:18.147 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:18.147 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.147 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.148 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.148 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.148 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.148 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.148 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.405 00:14:18.405 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:18.405 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:18.405 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.663 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.663 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.663 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.663 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.663 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.663 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:18.663 { 00:14:18.663 "cntlid": 13, 00:14:18.663 "qid": 0, 00:14:18.663 "state": "enabled", 00:14:18.663 "listen_address": { 00:14:18.663 "trtype": "TCP", 00:14:18.663 "adrfam": "IPv4", 00:14:18.663 "traddr": "10.0.0.2", 00:14:18.663 "trsvcid": "4420" 00:14:18.663 }, 00:14:18.663 "peer_address": { 00:14:18.663 "trtype": "TCP", 00:14:18.663 "adrfam": "IPv4", 00:14:18.663 "traddr": "10.0.0.1", 00:14:18.663 "trsvcid": "55842" 00:14:18.663 }, 00:14:18.663 "auth": { 00:14:18.663 "state": "completed", 00:14:18.663 "digest": "sha256", 00:14:18.663 "dhgroup": "ffdhe2048" 00:14:18.663 } 00:14:18.663 } 00:14:18.663 ]' 00:14:18.663 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:18.663 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:18.663 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:18.663 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:18.663 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:18.663 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.663 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.663 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.920 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDBhZmIxZmNmZmUyMmNkYmRiOGNkMzkxZTIyNmM0ZGI2NDcxMzAzOTc3ZjBjZTA3+ZhsHw==: --dhchap-ctrl-secret DHHC-1:01:YzhlMGQzZjE2ODUxZTI5MDU0MjM5MTIyYWQ0YTExOThWm3ye: 00:14:19.852 20:13:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.852 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:19.852 20:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.852 20:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.852 20:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.852 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.852 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:19.852 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:20.109 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:20.109 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:20.109 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:20.109 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:20.109 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:20.109 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.109 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:20.109 20:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.109 20:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.109 20:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.110 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:20.110 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:20.674 00:14:20.674 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:20.674 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.674 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.674 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.674 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:14:20.674 20:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.674 20:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.674 20:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.674 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:20.674 { 00:14:20.674 "cntlid": 15, 00:14:20.674 "qid": 0, 00:14:20.674 "state": "enabled", 00:14:20.674 "listen_address": { 00:14:20.674 "trtype": "TCP", 00:14:20.674 "adrfam": "IPv4", 00:14:20.674 "traddr": "10.0.0.2", 00:14:20.674 "trsvcid": "4420" 00:14:20.674 }, 00:14:20.674 "peer_address": { 00:14:20.674 "trtype": "TCP", 00:14:20.674 "adrfam": "IPv4", 00:14:20.674 "traddr": "10.0.0.1", 00:14:20.674 "trsvcid": "55874" 00:14:20.674 }, 00:14:20.674 "auth": { 00:14:20.674 "state": "completed", 00:14:20.674 "digest": "sha256", 00:14:20.674 "dhgroup": "ffdhe2048" 00:14:20.674 } 00:14:20.674 } 00:14:20.674 ]' 00:14:20.674 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:20.932 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.932 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:20.932 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:20.932 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.932 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.932 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.932 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.190 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDljMWNhMjYyYzE0YTYxNWNkZWIwYTMxNDYzNWYzNTBlNjMxMTg4MjU4Mjg4ODNhZDgxMGFjMGVjNzg5NDUzYn29EOQ=: 00:14:22.122 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.122 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:22.122 20:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.122 20:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.122 20:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.122 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:22.122 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:22.122 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:22.122 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:22.379 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:22.379 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:22.379 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:22.379 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:22.379 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:22.379 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.379 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.379 20:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.379 20:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.379 20:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.379 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.379 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.636 00:14:22.636 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.636 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.636 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.894 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.894 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.894 20:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.894 20:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.894 20:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.894 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:22.894 { 00:14:22.894 "cntlid": 17, 00:14:22.894 "qid": 0, 00:14:22.894 "state": "enabled", 00:14:22.894 "listen_address": { 00:14:22.894 "trtype": "TCP", 00:14:22.894 "adrfam": "IPv4", 00:14:22.894 "traddr": "10.0.0.2", 00:14:22.894 "trsvcid": "4420" 00:14:22.894 }, 00:14:22.894 "peer_address": { 00:14:22.894 "trtype": "TCP", 00:14:22.894 "adrfam": "IPv4", 00:14:22.894 "traddr": "10.0.0.1", 00:14:22.894 "trsvcid": "55900" 00:14:22.894 }, 00:14:22.894 "auth": { 00:14:22.894 "state": "completed", 00:14:22.894 "digest": "sha256", 00:14:22.894 "dhgroup": "ffdhe3072" 00:14:22.894 } 00:14:22.894 } 00:14:22.894 ]' 00:14:22.894 20:13:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.894 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.894 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.894 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:22.894 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.894 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.894 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.894 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.151 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjQzNDI1NTE5Zjk2ZjEyZjJmZWI2Y2Y3OTYyZmExYjBmNWJlYzM4NDY0NTEzZjEzu0hpxg==: --dhchap-ctrl-secret DHHC-1:03:NDBhMTg4OTNlZTU1NWQyOTk5ZDg5MWU4ZTJiN2EwZDg3M2FhNDM3M2IxNDNlNTYxNjMwN2ZlOWExMjQ5MDYyNC750lM=: 00:14:24.083 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.083 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:24.083 20:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.083 20:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.083 20:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.083 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:24.083 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:24.083 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:24.340 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:24.341 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:24.341 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:24.341 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:24.341 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:24.341 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.341 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.341 20:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.341 
20:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.341 20:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.341 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.341 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.905 00:14:24.905 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.905 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.905 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.905 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.905 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.905 20:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.905 20:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.905 20:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.905 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.905 { 00:14:24.905 "cntlid": 19, 00:14:24.905 "qid": 0, 00:14:24.905 "state": "enabled", 00:14:24.905 "listen_address": { 00:14:24.905 "trtype": "TCP", 00:14:24.905 "adrfam": "IPv4", 00:14:24.905 "traddr": "10.0.0.2", 00:14:24.905 "trsvcid": "4420" 00:14:24.905 }, 00:14:24.905 "peer_address": { 00:14:24.905 "trtype": "TCP", 00:14:24.905 "adrfam": "IPv4", 00:14:24.905 "traddr": "10.0.0.1", 00:14:24.905 "trsvcid": "55930" 00:14:24.905 }, 00:14:24.905 "auth": { 00:14:24.905 "state": "completed", 00:14:24.905 "digest": "sha256", 00:14:24.905 "dhgroup": "ffdhe3072" 00:14:24.905 } 00:14:24.905 } 00:14:24.905 ]' 00:14:24.905 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:25.163 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:25.163 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:25.163 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:25.163 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:25.163 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.163 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.163 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.421 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NWUwZDZmYzkyNTBkNmRiZTY3Y2ZjMDIwODU3NGI5OWZTxq7R: --dhchap-ctrl-secret DHHC-1:02:YmVhNjhkNTVjNGY1YmUwZDc4ZTk1OTIzZWIwMzE1ZDU4M2ViMmRlNTJkZjk4ZGRl5NAoWA==: 00:14:26.355 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.355 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:26.355 20:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.355 20:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.355 20:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.355 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:26.355 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:26.355 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:26.612 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:26.612 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.612 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:26.612 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:26.612 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:26.612 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.612 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.612 20:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.612 20:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.612 20:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.612 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.612 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.870 00:14:26.870 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.870 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:14:26.870 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.126 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.126 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.127 20:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.127 20:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.127 20:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.127 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:27.127 { 00:14:27.127 "cntlid": 21, 00:14:27.127 "qid": 0, 00:14:27.127 "state": "enabled", 00:14:27.127 "listen_address": { 00:14:27.127 "trtype": "TCP", 00:14:27.127 "adrfam": "IPv4", 00:14:27.127 "traddr": "10.0.0.2", 00:14:27.127 "trsvcid": "4420" 00:14:27.127 }, 00:14:27.127 "peer_address": { 00:14:27.127 "trtype": "TCP", 00:14:27.127 "adrfam": "IPv4", 00:14:27.127 "traddr": "10.0.0.1", 00:14:27.127 "trsvcid": "55950" 00:14:27.127 }, 00:14:27.127 "auth": { 00:14:27.127 "state": "completed", 00:14:27.127 "digest": "sha256", 00:14:27.127 "dhgroup": "ffdhe3072" 00:14:27.127 } 00:14:27.127 } 00:14:27.127 ]' 00:14:27.127 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:27.127 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:27.127 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:27.127 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:27.127 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:27.383 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.383 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.383 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.640 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDBhZmIxZmNmZmUyMmNkYmRiOGNkMzkxZTIyNmM0ZGI2NDcxMzAzOTc3ZjBjZTA3+ZhsHw==: --dhchap-ctrl-secret DHHC-1:01:YzhlMGQzZjE2ODUxZTI5MDU0MjM5MTIyYWQ0YTExOThWm3ye: 00:14:28.573 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.573 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:28.573 20:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.573 20:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.574 20:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.574 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:14:28.574 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:28.574 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:28.574 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:28.574 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.574 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:28.574 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:28.574 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:28.574 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.574 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:28.574 20:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.574 20:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.574 20:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.574 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:28.574 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:29.136 00:14:29.137 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:29.137 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.137 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:29.393 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.393 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.393 20:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.393 20:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.393 20:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.393 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:29.393 { 00:14:29.393 "cntlid": 23, 00:14:29.393 "qid": 0, 00:14:29.393 "state": "enabled", 00:14:29.393 "listen_address": { 00:14:29.393 "trtype": "TCP", 00:14:29.393 "adrfam": "IPv4", 00:14:29.393 "traddr": "10.0.0.2", 00:14:29.393 "trsvcid": "4420" 00:14:29.393 }, 00:14:29.393 "peer_address": { 00:14:29.393 "trtype": "TCP", 00:14:29.393 "adrfam": "IPv4", 
00:14:29.393 "traddr": "10.0.0.1", 00:14:29.393 "trsvcid": "42810" 00:14:29.393 }, 00:14:29.393 "auth": { 00:14:29.393 "state": "completed", 00:14:29.393 "digest": "sha256", 00:14:29.393 "dhgroup": "ffdhe3072" 00:14:29.393 } 00:14:29.393 } 00:14:29.393 ]' 00:14:29.393 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:29.393 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:29.393 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:29.393 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:29.393 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:29.393 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.393 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.394 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.651 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDljMWNhMjYyYzE0YTYxNWNkZWIwYTMxNDYzNWYzNTBlNjMxMTg4MjU4Mjg4ODNhZDgxMGFjMGVjNzg5NDUzYn29EOQ=: 00:14:30.582 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.582 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:30.582 20:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.582 20:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.582 20:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.582 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:30.582 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:30.582 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:30.582 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:30.839 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:14:30.839 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.839 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:30.839 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:30.839 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:30.839 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.839 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.839 20:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.839 20:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.839 20:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.839 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.839 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.095 00:14:31.095 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:31.095 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:31.096 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.352 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.352 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.352 20:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.352 20:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.352 20:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.352 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:31.352 { 00:14:31.352 "cntlid": 25, 00:14:31.352 "qid": 0, 00:14:31.352 "state": "enabled", 00:14:31.352 "listen_address": { 00:14:31.352 "trtype": "TCP", 00:14:31.352 "adrfam": "IPv4", 00:14:31.352 "traddr": "10.0.0.2", 00:14:31.352 "trsvcid": "4420" 00:14:31.352 }, 00:14:31.352 "peer_address": { 00:14:31.352 "trtype": "TCP", 00:14:31.352 "adrfam": "IPv4", 00:14:31.352 "traddr": "10.0.0.1", 00:14:31.352 "trsvcid": "42836" 00:14:31.352 }, 00:14:31.352 "auth": { 00:14:31.352 "state": "completed", 00:14:31.352 "digest": "sha256", 00:14:31.352 "dhgroup": "ffdhe4096" 00:14:31.352 } 00:14:31.352 } 00:14:31.352 ]' 00:14:31.352 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:31.609 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:31.609 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:31.609 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:31.609 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:31.609 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.609 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.609 20:13:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.866 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjQzNDI1NTE5Zjk2ZjEyZjJmZWI2Y2Y3OTYyZmExYjBmNWJlYzM4NDY0NTEzZjEzu0hpxg==: --dhchap-ctrl-secret DHHC-1:03:NDBhMTg4OTNlZTU1NWQyOTk5ZDg5MWU4ZTJiN2EwZDg3M2FhNDM3M2IxNDNlNTYxNjMwN2ZlOWExMjQ5MDYyNC750lM=: 00:14:32.798 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.798 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:32.798 20:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.798 20:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.798 20:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.798 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:32.798 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:32.798 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:33.057 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:33.057 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.057 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:33.057 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:33.057 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:33.057 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.057 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.057 20:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.057 20:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.057 20:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.057 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.057 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.314 00:14:33.314 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:33.314 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:33.314 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.572 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.572 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.572 20:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.572 20:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.572 20:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.572 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:33.572 { 00:14:33.572 "cntlid": 27, 00:14:33.572 "qid": 0, 00:14:33.572 "state": "enabled", 00:14:33.572 "listen_address": { 00:14:33.572 "trtype": "TCP", 00:14:33.572 "adrfam": "IPv4", 00:14:33.572 "traddr": "10.0.0.2", 00:14:33.572 "trsvcid": "4420" 00:14:33.572 }, 00:14:33.572 "peer_address": { 00:14:33.572 "trtype": "TCP", 00:14:33.572 "adrfam": "IPv4", 00:14:33.572 "traddr": "10.0.0.1", 00:14:33.572 "trsvcid": "42870" 00:14:33.572 }, 00:14:33.572 "auth": { 00:14:33.572 "state": "completed", 00:14:33.572 "digest": "sha256", 00:14:33.572 "dhgroup": "ffdhe4096" 00:14:33.572 } 00:14:33.572 } 00:14:33.572 ]' 00:14:33.572 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:33.572 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.572 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.830 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:33.830 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.830 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.830 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.830 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.087 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NWUwZDZmYzkyNTBkNmRiZTY3Y2ZjMDIwODU3NGI5OWZTxq7R: --dhchap-ctrl-secret DHHC-1:02:YmVhNjhkNTVjNGY1YmUwZDc4ZTk1OTIzZWIwMzE1ZDU4M2ViMmRlNTJkZjk4ZGRl5NAoWA==: 00:14:35.018 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.018 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
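Condensed, the verification step that recurs throughout this part of the trace looks roughly like the sketch below; the jq filters, socket paths, subsystem NQN, and expected values (sha256, ffdhe4096, completed) are copied from the records above, and the script is only an illustrative condensation, not a verbatim excerpt of target/auth.sh.

#!/usr/bin/env bash
# Illustrative condensation of the qpair-verification step seen in the trace
# (here for the sha256 + ffdhe4096 iteration). Paths and values are taken
# from the log records above; this is a sketch, not part of the test suite.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: the authenticated controller must be visible as nvme0.
[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Target side: the qpair must report a completed DH-HMAC-CHAP handshake with
# the digest/dhgroup selected by bdev_nvme_set_options for this pass.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]

# Detach before the next key/dhgroup combination is exercised.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0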
00:14:35.018 20:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.018 20:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.018 20:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.018 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:35.018 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:35.018 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:35.275 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:35.275 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:35.275 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:35.275 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:35.275 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:35.275 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.275 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.275 20:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.275 20:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.275 20:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.275 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.275 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.531 00:14:35.531 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:35.531 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.531 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.788 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.788 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.788 20:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.788 20:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.788 20:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.788 
20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:35.788 { 00:14:35.788 "cntlid": 29, 00:14:35.788 "qid": 0, 00:14:35.788 "state": "enabled", 00:14:35.788 "listen_address": { 00:14:35.788 "trtype": "TCP", 00:14:35.788 "adrfam": "IPv4", 00:14:35.788 "traddr": "10.0.0.2", 00:14:35.788 "trsvcid": "4420" 00:14:35.788 }, 00:14:35.788 "peer_address": { 00:14:35.788 "trtype": "TCP", 00:14:35.788 "adrfam": "IPv4", 00:14:35.788 "traddr": "10.0.0.1", 00:14:35.788 "trsvcid": "42912" 00:14:35.788 }, 00:14:35.788 "auth": { 00:14:35.788 "state": "completed", 00:14:35.788 "digest": "sha256", 00:14:35.788 "dhgroup": "ffdhe4096" 00:14:35.788 } 00:14:35.788 } 00:14:35.788 ]' 00:14:35.788 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.788 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.788 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.788 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:35.788 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:36.045 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.045 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.045 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.302 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDBhZmIxZmNmZmUyMmNkYmRiOGNkMzkxZTIyNmM0ZGI2NDcxMzAzOTc3ZjBjZTA3+ZhsHw==: --dhchap-ctrl-secret DHHC-1:01:YzhlMGQzZjE2ODUxZTI5MDU0MjM5MTIyYWQ0YTExOThWm3ye: 00:14:37.231 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.232 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:37.232 20:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.232 20:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.232 20:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.232 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:37.232 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:37.232 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:37.488 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:37.488 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:37.488 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:14:37.488 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:37.488 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:37.488 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.488 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:37.488 20:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.488 20:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.488 20:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.488 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:37.488 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:37.745 00:14:37.746 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:37.746 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:37.746 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.002 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.002 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.002 20:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.002 20:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.002 20:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.002 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:38.002 { 00:14:38.002 "cntlid": 31, 00:14:38.002 "qid": 0, 00:14:38.002 "state": "enabled", 00:14:38.002 "listen_address": { 00:14:38.002 "trtype": "TCP", 00:14:38.002 "adrfam": "IPv4", 00:14:38.002 "traddr": "10.0.0.2", 00:14:38.002 "trsvcid": "4420" 00:14:38.002 }, 00:14:38.002 "peer_address": { 00:14:38.002 "trtype": "TCP", 00:14:38.002 "adrfam": "IPv4", 00:14:38.002 "traddr": "10.0.0.1", 00:14:38.002 "trsvcid": "60634" 00:14:38.002 }, 00:14:38.002 "auth": { 00:14:38.002 "state": "completed", 00:14:38.002 "digest": "sha256", 00:14:38.002 "dhgroup": "ffdhe4096" 00:14:38.002 } 00:14:38.002 } 00:14:38.002 ]' 00:14:38.002 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:38.002 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:38.002 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:38.002 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:38.002 20:13:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:38.259 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.259 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.259 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.259 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDljMWNhMjYyYzE0YTYxNWNkZWIwYTMxNDYzNWYzNTBlNjMxMTg4MjU4Mjg4ODNhZDgxMGFjMGVjNzg5NDUzYn29EOQ=: 00:14:39.190 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:39.447 20:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.447 20:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.447 20:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:39.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:39.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:39.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:39.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:39.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:39.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:39.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:39.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:39.704 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.704 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.704 20:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.705 20:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.705 20:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.705 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:14:39.705 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.270 00:14:40.270 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:40.270 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:40.270 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.270 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.270 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.270 20:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.270 20:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.270 20:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.270 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:40.270 { 00:14:40.270 "cntlid": 33, 00:14:40.270 "qid": 0, 00:14:40.270 "state": "enabled", 00:14:40.270 "listen_address": { 00:14:40.270 "trtype": "TCP", 00:14:40.270 "adrfam": "IPv4", 00:14:40.270 "traddr": "10.0.0.2", 00:14:40.270 "trsvcid": "4420" 00:14:40.270 }, 00:14:40.270 "peer_address": { 00:14:40.270 "trtype": "TCP", 00:14:40.270 "adrfam": "IPv4", 00:14:40.270 "traddr": "10.0.0.1", 00:14:40.270 "trsvcid": "60672" 00:14:40.270 }, 00:14:40.270 "auth": { 00:14:40.270 "state": "completed", 00:14:40.270 "digest": "sha256", 00:14:40.270 "dhgroup": "ffdhe6144" 00:14:40.270 } 00:14:40.270 } 00:14:40.270 ]' 00:14:40.270 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:40.528 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.528 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:40.528 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:40.528 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:40.528 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.528 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.528 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.786 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjQzNDI1NTE5Zjk2ZjEyZjJmZWI2Y2Y3OTYyZmExYjBmNWJlYzM4NDY0NTEzZjEzu0hpxg==: --dhchap-ctrl-secret DHHC-1:03:NDBhMTg4OTNlZTU1NWQyOTk5ZDg5MWU4ZTJiN2EwZDg3M2FhNDM3M2IxNDNlNTYxNjMwN2ZlOWExMjQ5MDYyNC750lM=: 00:14:41.717 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:14:41.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.717 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:41.717 20:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.717 20:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.717 20:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.717 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.717 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:41.717 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:41.974 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:41.974 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.974 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:41.974 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:41.974 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:41.974 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.974 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.974 20:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.974 20:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.974 20:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.974 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.974 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.538 00:14:42.538 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.538 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.538 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.795 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.795 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:14:42.795 20:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.795 20:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.795 20:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.795 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.795 { 00:14:42.795 "cntlid": 35, 00:14:42.795 "qid": 0, 00:14:42.795 "state": "enabled", 00:14:42.795 "listen_address": { 00:14:42.795 "trtype": "TCP", 00:14:42.795 "adrfam": "IPv4", 00:14:42.795 "traddr": "10.0.0.2", 00:14:42.795 "trsvcid": "4420" 00:14:42.795 }, 00:14:42.795 "peer_address": { 00:14:42.795 "trtype": "TCP", 00:14:42.795 "adrfam": "IPv4", 00:14:42.795 "traddr": "10.0.0.1", 00:14:42.795 "trsvcid": "60704" 00:14:42.795 }, 00:14:42.795 "auth": { 00:14:42.795 "state": "completed", 00:14:42.795 "digest": "sha256", 00:14:42.795 "dhgroup": "ffdhe6144" 00:14:42.795 } 00:14:42.795 } 00:14:42.795 ]' 00:14:42.795 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.795 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.795 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.795 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:42.795 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.795 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.795 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.795 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.051 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NWUwZDZmYzkyNTBkNmRiZTY3Y2ZjMDIwODU3NGI5OWZTxq7R: --dhchap-ctrl-secret DHHC-1:02:YmVhNjhkNTVjNGY1YmUwZDc4ZTk1OTIzZWIwMzE1ZDU4M2ViMmRlNTJkZjk4ZGRl5NAoWA==: 00:14:43.982 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.982 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:43.982 20:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.982 20:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.982 20:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.982 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.982 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:43.982 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
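Stepping back from the raw records, each iteration of the surrounding loop pairs one DH-HMAC-CHAP key with the digest/dhgroup currently under test; in condensed form the setup half of the cycle amounts to roughly the following. NQNs, addresses, and key names are taken from the trace, and the key objects themselves are assumed to have been registered earlier in target/auth.sh, outside this excerpt.

#!/usr/bin/env bash
# Illustrative sketch of the setup half of one connect_authenticate iteration
# (here: sha256 + ffdhe6144 with key2/ckey2), condensed from the trace; not a
# verbatim excerpt of target/auth.sh.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
subnqn=nqn.2024-03.io.spdk:cnode0

# 1. Limit the SPDK initiator (host.sock) to the digest/dhgroup under test.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

# 2. Allow the host NQN on the subsystem with the matching key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Attach a controller through the SPDK initiator, which triggers the
#    DH-HMAC-CHAP handshake against the target at 10.0.0.2:4420.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Verification, detach, the kernel-initiator pass via `nvme connect
# --dhchap-secret ... --dhchap-ctrl-secret ...`, `nvme disconnect`, and
# nvmf_subsystem_remove_host then follow as in the surrounding records.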
00:14:44.239 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:44.239 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:44.239 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:44.239 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:44.239 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:44.239 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.239 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.239 20:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.239 20:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.239 20:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.239 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.239 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.803 00:14:44.803 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.803 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.803 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.060 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.060 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.060 20:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.060 20:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.060 20:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.060 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.060 { 00:14:45.060 "cntlid": 37, 00:14:45.060 "qid": 0, 00:14:45.060 "state": "enabled", 00:14:45.060 "listen_address": { 00:14:45.060 "trtype": "TCP", 00:14:45.060 "adrfam": "IPv4", 00:14:45.060 "traddr": "10.0.0.2", 00:14:45.060 "trsvcid": "4420" 00:14:45.060 }, 00:14:45.060 "peer_address": { 00:14:45.060 "trtype": "TCP", 00:14:45.060 "adrfam": "IPv4", 00:14:45.060 "traddr": "10.0.0.1", 00:14:45.060 "trsvcid": "60738" 00:14:45.060 }, 00:14:45.060 "auth": { 00:14:45.060 "state": "completed", 00:14:45.060 "digest": "sha256", 00:14:45.060 "dhgroup": "ffdhe6144" 00:14:45.060 } 00:14:45.060 } 00:14:45.060 ]' 00:14:45.060 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:14:45.060 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.060 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.060 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:45.060 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.317 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.317 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.317 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.574 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDBhZmIxZmNmZmUyMmNkYmRiOGNkMzkxZTIyNmM0ZGI2NDcxMzAzOTc3ZjBjZTA3+ZhsHw==: --dhchap-ctrl-secret DHHC-1:01:YzhlMGQzZjE2ODUxZTI5MDU0MjM5MTIyYWQ0YTExOThWm3ye: 00:14:46.504 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.504 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:46.504 20:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.504 20:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.504 20:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.504 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.504 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:46.504 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:46.760 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:14:46.760 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:46.760 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:46.760 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:46.760 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:46.760 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.760 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:46.760 20:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.760 20:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.760 20:13:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.760 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:46.760 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:47.324 00:14:47.324 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.324 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.324 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.325 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.325 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.325 20:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.325 20:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.325 20:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.325 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:47.325 { 00:14:47.325 "cntlid": 39, 00:14:47.325 "qid": 0, 00:14:47.325 "state": "enabled", 00:14:47.325 "listen_address": { 00:14:47.325 "trtype": "TCP", 00:14:47.325 "adrfam": "IPv4", 00:14:47.325 "traddr": "10.0.0.2", 00:14:47.325 "trsvcid": "4420" 00:14:47.325 }, 00:14:47.325 "peer_address": { 00:14:47.325 "trtype": "TCP", 00:14:47.325 "adrfam": "IPv4", 00:14:47.325 "traddr": "10.0.0.1", 00:14:47.325 "trsvcid": "60780" 00:14:47.325 }, 00:14:47.325 "auth": { 00:14:47.325 "state": "completed", 00:14:47.325 "digest": "sha256", 00:14:47.325 "dhgroup": "ffdhe6144" 00:14:47.325 } 00:14:47.325 } 00:14:47.325 ]' 00:14:47.325 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:47.581 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:47.581 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:47.581 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:47.581 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:47.581 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.581 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.581 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.838 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:03:ZDljMWNhMjYyYzE0YTYxNWNkZWIwYTMxNDYzNWYzNTBlNjMxMTg4MjU4Mjg4ODNhZDgxMGFjMGVjNzg5NDUzYn29EOQ=: 00:14:48.769 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.769 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:48.769 20:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.769 20:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.769 20:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.769 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:48.769 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:48.769 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:48.769 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:49.026 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:49.026 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.026 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:49.026 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:49.026 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:49.026 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.026 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.026 20:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.026 20:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.026 20:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.026 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.026 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.957 00:14:49.957 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:49.957 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:49.957 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.214 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.214 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.214 20:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.214 20:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.214 20:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.214 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.214 { 00:14:50.214 "cntlid": 41, 00:14:50.214 "qid": 0, 00:14:50.214 "state": "enabled", 00:14:50.214 "listen_address": { 00:14:50.214 "trtype": "TCP", 00:14:50.214 "adrfam": "IPv4", 00:14:50.214 "traddr": "10.0.0.2", 00:14:50.214 "trsvcid": "4420" 00:14:50.214 }, 00:14:50.214 "peer_address": { 00:14:50.214 "trtype": "TCP", 00:14:50.214 "adrfam": "IPv4", 00:14:50.214 "traddr": "10.0.0.1", 00:14:50.214 "trsvcid": "37960" 00:14:50.214 }, 00:14:50.214 "auth": { 00:14:50.214 "state": "completed", 00:14:50.214 "digest": "sha256", 00:14:50.214 "dhgroup": "ffdhe8192" 00:14:50.214 } 00:14:50.214 } 00:14:50.214 ]' 00:14:50.214 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.214 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.214 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.214 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:50.214 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.214 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.214 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.214 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.471 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjQzNDI1NTE5Zjk2ZjEyZjJmZWI2Y2Y3OTYyZmExYjBmNWJlYzM4NDY0NTEzZjEzu0hpxg==: --dhchap-ctrl-secret DHHC-1:03:NDBhMTg4OTNlZTU1NWQyOTk5ZDg5MWU4ZTJiN2EwZDg3M2FhNDM3M2IxNDNlNTYxNjMwN2ZlOWExMjQ5MDYyNC750lM=: 00:14:51.403 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.404 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:51.404 20:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.404 20:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.404 20:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.404 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:14:51.404 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:51.404 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:51.661 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:51.661 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:51.661 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:51.661 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:51.661 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:51.661 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.661 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.661 20:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.661 20:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.661 20:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.661 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.661 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.594 00:14:52.594 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.594 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.594 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.851 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.851 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.851 20:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.851 20:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.851 20:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.851 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:52.851 { 00:14:52.851 "cntlid": 43, 00:14:52.851 "qid": 0, 00:14:52.851 "state": "enabled", 00:14:52.851 "listen_address": { 00:14:52.851 "trtype": "TCP", 00:14:52.851 "adrfam": "IPv4", 00:14:52.851 "traddr": "10.0.0.2", 00:14:52.851 "trsvcid": "4420" 00:14:52.851 }, 00:14:52.851 "peer_address": { 
00:14:52.851 "trtype": "TCP", 00:14:52.851 "adrfam": "IPv4", 00:14:52.851 "traddr": "10.0.0.1", 00:14:52.851 "trsvcid": "37990" 00:14:52.851 }, 00:14:52.851 "auth": { 00:14:52.851 "state": "completed", 00:14:52.851 "digest": "sha256", 00:14:52.851 "dhgroup": "ffdhe8192" 00:14:52.851 } 00:14:52.851 } 00:14:52.851 ]' 00:14:52.851 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:52.851 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.851 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:53.109 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:53.109 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:53.109 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.109 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.109 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.366 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NWUwZDZmYzkyNTBkNmRiZTY3Y2ZjMDIwODU3NGI5OWZTxq7R: --dhchap-ctrl-secret DHHC-1:02:YmVhNjhkNTVjNGY1YmUwZDc4ZTk1OTIzZWIwMzE1ZDU4M2ViMmRlNTJkZjk4ZGRl5NAoWA==: 00:14:54.297 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.297 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:54.297 20:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.297 20:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.297 20:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.297 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.297 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:54.297 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:54.555 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:54.555 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:54.555 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:54.555 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:54.555 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:54.555 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.555 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.555 20:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.555 20:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.555 20:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.555 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.555 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.487 00:14:55.487 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:55.487 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:55.487 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.487 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.487 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.487 20:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.487 20:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.744 20:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.744 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:55.744 { 00:14:55.744 "cntlid": 45, 00:14:55.744 "qid": 0, 00:14:55.744 "state": "enabled", 00:14:55.744 "listen_address": { 00:14:55.744 "trtype": "TCP", 00:14:55.744 "adrfam": "IPv4", 00:14:55.744 "traddr": "10.0.0.2", 00:14:55.744 "trsvcid": "4420" 00:14:55.744 }, 00:14:55.744 "peer_address": { 00:14:55.744 "trtype": "TCP", 00:14:55.744 "adrfam": "IPv4", 00:14:55.744 "traddr": "10.0.0.1", 00:14:55.744 "trsvcid": "38008" 00:14:55.744 }, 00:14:55.744 "auth": { 00:14:55.744 "state": "completed", 00:14:55.744 "digest": "sha256", 00:14:55.744 "dhgroup": "ffdhe8192" 00:14:55.744 } 00:14:55.744 } 00:14:55.744 ]' 00:14:55.744 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:55.744 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.744 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:55.744 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:55.744 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.744 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.744 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.744 20:13:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.002 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDBhZmIxZmNmZmUyMmNkYmRiOGNkMzkxZTIyNmM0ZGI2NDcxMzAzOTc3ZjBjZTA3+ZhsHw==: --dhchap-ctrl-secret DHHC-1:01:YzhlMGQzZjE2ODUxZTI5MDU0MjM5MTIyYWQ0YTExOThWm3ye: 00:14:56.933 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.933 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:56.933 20:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.933 20:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.933 20:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.933 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:56.933 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:56.933 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:57.190 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:57.190 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.190 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:57.190 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:57.190 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:57.190 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.190 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:57.190 20:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.190 20:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.190 20:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.190 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:57.190 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
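The entries above are one pass of the sha256/ffdhe8192 authentication loop: the host-side bdev_nvme layer is pinned to the digest/dhgroup pair, the host NQN is registered on the subsystem with the matching DH-HMAC-CHAP key pair, and an authenticated controller is attached over TCP. A minimal sketch of that sequence, using the socket path, addresses and NQNs that appear in this log (the log invokes rpc.py by its full workspace path; the key index N and its secrets below are placeholders, not values from this run):

    # host-side RPC socket: restrict the initiator to sha256 + ffdhe8192
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # target-side RPC (default socket): allow the host NQN with key N / ctrlr key N
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --dhchap-key keyN --dhchap-ctrlr-key ckeyN
    # host-side RPC: attach an authenticated controller over TCP
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key keyN --dhchap-ctrlr-key ckeyN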
00:14:58.121 00:14:58.121 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.121 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.121 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.378 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.378 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.378 20:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.378 20:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.378 20:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.378 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.378 { 00:14:58.378 "cntlid": 47, 00:14:58.378 "qid": 0, 00:14:58.378 "state": "enabled", 00:14:58.378 "listen_address": { 00:14:58.378 "trtype": "TCP", 00:14:58.378 "adrfam": "IPv4", 00:14:58.378 "traddr": "10.0.0.2", 00:14:58.378 "trsvcid": "4420" 00:14:58.378 }, 00:14:58.378 "peer_address": { 00:14:58.378 "trtype": "TCP", 00:14:58.378 "adrfam": "IPv4", 00:14:58.378 "traddr": "10.0.0.1", 00:14:58.378 "trsvcid": "44694" 00:14:58.378 }, 00:14:58.378 "auth": { 00:14:58.378 "state": "completed", 00:14:58.378 "digest": "sha256", 00:14:58.378 "dhgroup": "ffdhe8192" 00:14:58.378 } 00:14:58.378 } 00:14:58.378 ]' 00:14:58.378 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.378 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.378 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.378 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:58.378 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.378 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.378 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.378 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.635 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDljMWNhMjYyYzE0YTYxNWNkZWIwYTMxNDYzNWYzNTBlNjMxMTg4MjU4Mjg4ODNhZDgxMGFjMGVjNzg5NDUzYn29EOQ=: 00:14:59.567 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.567 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:59.567 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.567 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.567 
20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.567 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:59.568 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:59.568 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.568 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:59.568 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:00.132 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:15:00.132 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.132 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:00.132 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:00.132 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:00.132 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.132 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.132 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.132 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.132 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.132 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.132 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.389 00:15:00.389 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:00.389 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:00.389 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.647 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.647 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.647 20:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.647 20:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.647 20:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.647 20:13:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.647 { 00:15:00.647 "cntlid": 49, 00:15:00.647 "qid": 0, 00:15:00.647 "state": "enabled", 00:15:00.647 "listen_address": { 00:15:00.647 "trtype": "TCP", 00:15:00.647 "adrfam": "IPv4", 00:15:00.647 "traddr": "10.0.0.2", 00:15:00.647 "trsvcid": "4420" 00:15:00.647 }, 00:15:00.647 "peer_address": { 00:15:00.647 "trtype": "TCP", 00:15:00.647 "adrfam": "IPv4", 00:15:00.647 "traddr": "10.0.0.1", 00:15:00.647 "trsvcid": "44718" 00:15:00.647 }, 00:15:00.647 "auth": { 00:15:00.647 "state": "completed", 00:15:00.647 "digest": "sha384", 00:15:00.647 "dhgroup": "null" 00:15:00.647 } 00:15:00.647 } 00:15:00.647 ]' 00:15:00.647 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:00.647 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:00.647 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:00.647 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:00.647 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.647 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.647 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.647 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.904 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjQzNDI1NTE5Zjk2ZjEyZjJmZWI2Y2Y3OTYyZmExYjBmNWJlYzM4NDY0NTEzZjEzu0hpxg==: --dhchap-ctrl-secret DHHC-1:03:NDBhMTg4OTNlZTU1NWQyOTk5ZDg5MWU4ZTJiN2EwZDg3M2FhNDM3M2IxNDNlNTYxNjMwN2ZlOWExMjQ5MDYyNC750lM=: 00:15:01.836 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.836 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:01.836 20:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.836 20:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.836 20:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.836 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:01.836 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:01.836 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:02.093 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:02.093 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:02.093 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:15:02.093 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:02.093 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:02.093 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.093 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.093 20:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.093 20:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.093 20:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.093 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.093 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.350 00:15:02.350 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.351 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.351 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:02.608 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.608 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.608 20:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.608 20:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.608 20:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.608 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:02.608 { 00:15:02.608 "cntlid": 51, 00:15:02.608 "qid": 0, 00:15:02.608 "state": "enabled", 00:15:02.608 "listen_address": { 00:15:02.608 "trtype": "TCP", 00:15:02.608 "adrfam": "IPv4", 00:15:02.608 "traddr": "10.0.0.2", 00:15:02.608 "trsvcid": "4420" 00:15:02.608 }, 00:15:02.608 "peer_address": { 00:15:02.608 "trtype": "TCP", 00:15:02.608 "adrfam": "IPv4", 00:15:02.608 "traddr": "10.0.0.1", 00:15:02.608 "trsvcid": "44738" 00:15:02.608 }, 00:15:02.608 "auth": { 00:15:02.608 "state": "completed", 00:15:02.608 "digest": "sha384", 00:15:02.608 "dhgroup": "null" 00:15:02.608 } 00:15:02.608 } 00:15:02.608 ]' 00:15:02.608 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:02.608 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:02.608 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:02.865 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
00:15:02.865 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:02.865 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.865 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.865 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.123 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NWUwZDZmYzkyNTBkNmRiZTY3Y2ZjMDIwODU3NGI5OWZTxq7R: --dhchap-ctrl-secret DHHC-1:02:YmVhNjhkNTVjNGY1YmUwZDc4ZTk1OTIzZWIwMzE1ZDU4M2ViMmRlNTJkZjk4ZGRl5NAoWA==: 00:15:04.054 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.054 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:04.054 20:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.054 20:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.054 20:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.054 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.054 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:04.054 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:04.312 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:04.312 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.312 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:04.312 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:04.312 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:04.312 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.312 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.312 20:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.312 20:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.312 20:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.312 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:04.312 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.568 00:15:04.568 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.568 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.568 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.825 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.825 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.825 20:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.825 20:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.825 20:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.825 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.825 { 00:15:04.825 "cntlid": 53, 00:15:04.825 "qid": 0, 00:15:04.825 "state": "enabled", 00:15:04.825 "listen_address": { 00:15:04.825 "trtype": "TCP", 00:15:04.825 "adrfam": "IPv4", 00:15:04.825 "traddr": "10.0.0.2", 00:15:04.825 "trsvcid": "4420" 00:15:04.825 }, 00:15:04.825 "peer_address": { 00:15:04.825 "trtype": "TCP", 00:15:04.825 "adrfam": "IPv4", 00:15:04.825 "traddr": "10.0.0.1", 00:15:04.825 "trsvcid": "44764" 00:15:04.825 }, 00:15:04.825 "auth": { 00:15:04.825 "state": "completed", 00:15:04.825 "digest": "sha384", 00:15:04.825 "dhgroup": "null" 00:15:04.825 } 00:15:04.825 } 00:15:04.825 ]' 00:15:04.825 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.825 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:04.825 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.082 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:05.082 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:05.082 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.082 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.082 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.339 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDBhZmIxZmNmZmUyMmNkYmRiOGNkMzkxZTIyNmM0ZGI2NDcxMzAzOTc3ZjBjZTA3+ZhsHw==: --dhchap-ctrl-secret DHHC-1:01:YzhlMGQzZjE2ODUxZTI5MDU0MjM5MTIyYWQ0YTExOThWm3ye: 00:15:06.271 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.271 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:15:06.271 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:06.271 20:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.271 20:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.271 20:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.271 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.271 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:06.271 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:06.529 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:06.529 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.529 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:06.529 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:06.529 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:06.529 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.529 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:06.529 20:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.529 20:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.529 20:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.529 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:06.529 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:06.787 00:15:06.787 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.787 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:06.787 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.044 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.044 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.044 20:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.044 20:13:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:07.044 20:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.044 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.044 { 00:15:07.044 "cntlid": 55, 00:15:07.044 "qid": 0, 00:15:07.044 "state": "enabled", 00:15:07.044 "listen_address": { 00:15:07.044 "trtype": "TCP", 00:15:07.044 "adrfam": "IPv4", 00:15:07.044 "traddr": "10.0.0.2", 00:15:07.044 "trsvcid": "4420" 00:15:07.044 }, 00:15:07.044 "peer_address": { 00:15:07.044 "trtype": "TCP", 00:15:07.044 "adrfam": "IPv4", 00:15:07.044 "traddr": "10.0.0.1", 00:15:07.044 "trsvcid": "44774" 00:15:07.044 }, 00:15:07.044 "auth": { 00:15:07.044 "state": "completed", 00:15:07.044 "digest": "sha384", 00:15:07.044 "dhgroup": "null" 00:15:07.044 } 00:15:07.044 } 00:15:07.044 ]' 00:15:07.044 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.044 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:07.044 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.044 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:07.044 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.044 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.044 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.302 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.302 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDljMWNhMjYyYzE0YTYxNWNkZWIwYTMxNDYzNWYzNTBlNjMxMTg4MjU4Mjg4ODNhZDgxMGFjMGVjNzg5NDUzYn29EOQ=: 00:15:08.674 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.674 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:08.674 20:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.674 20:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.674 20:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.674 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.674 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.674 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:08.674 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:08.674 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:08.674 
20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.674 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:08.674 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:08.674 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:08.674 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.675 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.675 20:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.675 20:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.675 20:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.675 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.675 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.932 00:15:08.932 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:08.932 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:08.932 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.189 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.189 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.189 20:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.189 20:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.189 20:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.189 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.189 { 00:15:09.189 "cntlid": 57, 00:15:09.189 "qid": 0, 00:15:09.189 "state": "enabled", 00:15:09.189 "listen_address": { 00:15:09.189 "trtype": "TCP", 00:15:09.189 "adrfam": "IPv4", 00:15:09.189 "traddr": "10.0.0.2", 00:15:09.189 "trsvcid": "4420" 00:15:09.189 }, 00:15:09.189 "peer_address": { 00:15:09.189 "trtype": "TCP", 00:15:09.189 "adrfam": "IPv4", 00:15:09.189 "traddr": "10.0.0.1", 00:15:09.189 "trsvcid": "39848" 00:15:09.189 }, 00:15:09.189 "auth": { 00:15:09.189 "state": "completed", 00:15:09.189 "digest": "sha384", 00:15:09.189 "dhgroup": "ffdhe2048" 00:15:09.189 } 00:15:09.189 } 00:15:09.189 ]' 00:15:09.189 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.189 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:09.189 20:13:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.189 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:09.189 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:09.446 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.446 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.446 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.704 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjQzNDI1NTE5Zjk2ZjEyZjJmZWI2Y2Y3OTYyZmExYjBmNWJlYzM4NDY0NTEzZjEzu0hpxg==: --dhchap-ctrl-secret DHHC-1:03:NDBhMTg4OTNlZTU1NWQyOTk5ZDg5MWU4ZTJiN2EwZDg3M2FhNDM3M2IxNDNlNTYxNjMwN2ZlOWExMjQ5MDYyNC750lM=: 00:15:10.638 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.638 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:10.638 20:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.638 20:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.638 20:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.638 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:10.638 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:10.638 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:10.638 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:10.638 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:10.638 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:10.638 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:10.638 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:10.638 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.638 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.638 20:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.638 20:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.638 20:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.638 20:13:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.638 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.202 00:15:11.202 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:11.202 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:11.202 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.202 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.202 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.202 20:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.202 20:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.202 20:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.202 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:11.202 { 00:15:11.202 "cntlid": 59, 00:15:11.202 "qid": 0, 00:15:11.202 "state": "enabled", 00:15:11.202 "listen_address": { 00:15:11.202 "trtype": "TCP", 00:15:11.202 "adrfam": "IPv4", 00:15:11.202 "traddr": "10.0.0.2", 00:15:11.202 "trsvcid": "4420" 00:15:11.202 }, 00:15:11.202 "peer_address": { 00:15:11.202 "trtype": "TCP", 00:15:11.202 "adrfam": "IPv4", 00:15:11.202 "traddr": "10.0.0.1", 00:15:11.202 "trsvcid": "39874" 00:15:11.202 }, 00:15:11.202 "auth": { 00:15:11.202 "state": "completed", 00:15:11.202 "digest": "sha384", 00:15:11.202 "dhgroup": "ffdhe2048" 00:15:11.202 } 00:15:11.202 } 00:15:11.202 ]' 00:15:11.202 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:11.458 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:11.458 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:11.458 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:11.458 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:11.458 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.458 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.458 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.715 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:01:NWUwZDZmYzkyNTBkNmRiZTY3Y2ZjMDIwODU3NGI5OWZTxq7R: --dhchap-ctrl-secret DHHC-1:02:YmVhNjhkNTVjNGY1YmUwZDc4ZTk1OTIzZWIwMzE1ZDU4M2ViMmRlNTJkZjk4ZGRl5NAoWA==: 00:15:12.646 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.646 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:12.646 20:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.646 20:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.646 20:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.646 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.646 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:12.646 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:12.904 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:12.904 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:12.904 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:12.904 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:12.904 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:12.904 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.904 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.904 20:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.904 20:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.904 20:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.904 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.904 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.161 00:15:13.161 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:13.161 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:13.161 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:13.418 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.418 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.418 20:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.418 20:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.418 20:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.418 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:13.418 { 00:15:13.418 "cntlid": 61, 00:15:13.418 "qid": 0, 00:15:13.418 "state": "enabled", 00:15:13.418 "listen_address": { 00:15:13.418 "trtype": "TCP", 00:15:13.418 "adrfam": "IPv4", 00:15:13.418 "traddr": "10.0.0.2", 00:15:13.418 "trsvcid": "4420" 00:15:13.418 }, 00:15:13.418 "peer_address": { 00:15:13.418 "trtype": "TCP", 00:15:13.418 "adrfam": "IPv4", 00:15:13.418 "traddr": "10.0.0.1", 00:15:13.418 "trsvcid": "39892" 00:15:13.418 }, 00:15:13.418 "auth": { 00:15:13.418 "state": "completed", 00:15:13.418 "digest": "sha384", 00:15:13.418 "dhgroup": "ffdhe2048" 00:15:13.418 } 00:15:13.418 } 00:15:13.418 ]' 00:15:13.418 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.418 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:13.418 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.676 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:13.676 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.676 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.676 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.676 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.933 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDBhZmIxZmNmZmUyMmNkYmRiOGNkMzkxZTIyNmM0ZGI2NDcxMzAzOTc3ZjBjZTA3+ZhsHw==: --dhchap-ctrl-secret DHHC-1:01:YzhlMGQzZjE2ODUxZTI5MDU0MjM5MTIyYWQ0YTExOThWm3ye: 00:15:14.864 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.864 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:14.864 20:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.864 20:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.864 20:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.864 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:14.864 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:15:14.864 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:15.121 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:15.121 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:15.121 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:15.121 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:15.121 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:15.121 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.121 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:15.121 20:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.121 20:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.121 20:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.121 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:15.121 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:15.378 00:15:15.378 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:15.378 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:15.378 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.635 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.635 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.635 20:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.635 20:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.635 20:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.635 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:15.635 { 00:15:15.635 "cntlid": 63, 00:15:15.635 "qid": 0, 00:15:15.635 "state": "enabled", 00:15:15.635 "listen_address": { 00:15:15.635 "trtype": "TCP", 00:15:15.635 "adrfam": "IPv4", 00:15:15.635 "traddr": "10.0.0.2", 00:15:15.635 "trsvcid": "4420" 00:15:15.635 }, 00:15:15.635 "peer_address": { 00:15:15.635 "trtype": "TCP", 00:15:15.635 "adrfam": "IPv4", 00:15:15.635 "traddr": "10.0.0.1", 00:15:15.635 "trsvcid": "39914" 00:15:15.635 }, 00:15:15.635 "auth": { 00:15:15.635 "state": "completed", 00:15:15.635 "digest": 
"sha384", 00:15:15.635 "dhgroup": "ffdhe2048" 00:15:15.635 } 00:15:15.635 } 00:15:15.635 ]' 00:15:15.635 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:15.635 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:15.635 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:15.635 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:15.635 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:15.635 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.635 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.635 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.892 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDljMWNhMjYyYzE0YTYxNWNkZWIwYTMxNDYzNWYzNTBlNjMxMTg4MjU4Mjg4ODNhZDgxMGFjMGVjNzg5NDUzYn29EOQ=: 00:15:16.822 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.822 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:16.822 20:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.822 20:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.822 20:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.822 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:16.822 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:16.822 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:16.822 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:17.079 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:17.079 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:17.079 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:17.079 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:17.079 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:17.079 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.079 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:15:17.079 20:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.079 20:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.079 20:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.079 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.079 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.643 00:15:17.643 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:17.643 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.643 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:17.643 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.643 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.643 20:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.643 20:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.900 20:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.901 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:17.901 { 00:15:17.901 "cntlid": 65, 00:15:17.901 "qid": 0, 00:15:17.901 "state": "enabled", 00:15:17.901 "listen_address": { 00:15:17.901 "trtype": "TCP", 00:15:17.901 "adrfam": "IPv4", 00:15:17.901 "traddr": "10.0.0.2", 00:15:17.901 "trsvcid": "4420" 00:15:17.901 }, 00:15:17.901 "peer_address": { 00:15:17.901 "trtype": "TCP", 00:15:17.901 "adrfam": "IPv4", 00:15:17.901 "traddr": "10.0.0.1", 00:15:17.901 "trsvcid": "60520" 00:15:17.901 }, 00:15:17.901 "auth": { 00:15:17.901 "state": "completed", 00:15:17.901 "digest": "sha384", 00:15:17.901 "dhgroup": "ffdhe3072" 00:15:17.901 } 00:15:17.901 } 00:15:17.901 ]' 00:15:17.901 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:17.901 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:17.901 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:17.901 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:17.901 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:17.901 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.901 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.901 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.158 
20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjQzNDI1NTE5Zjk2ZjEyZjJmZWI2Y2Y3OTYyZmExYjBmNWJlYzM4NDY0NTEzZjEzu0hpxg==: --dhchap-ctrl-secret DHHC-1:03:NDBhMTg4OTNlZTU1NWQyOTk5ZDg5MWU4ZTJiN2EwZDg3M2FhNDM3M2IxNDNlNTYxNjMwN2ZlOWExMjQ5MDYyNC750lM=: 00:15:19.091 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.091 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:19.091 20:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.091 20:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.091 20:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.091 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.091 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:19.091 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:19.348 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:19.348 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:19.348 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:19.348 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:19.348 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:19.348 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.348 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.348 20:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.348 20:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.348 20:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.348 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.348 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.912 00:15:19.912 20:14:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:19.912 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:19.912 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.170 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.170 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.170 20:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.170 20:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.170 20:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.170 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.170 { 00:15:20.170 "cntlid": 67, 00:15:20.170 "qid": 0, 00:15:20.170 "state": "enabled", 00:15:20.170 "listen_address": { 00:15:20.170 "trtype": "TCP", 00:15:20.170 "adrfam": "IPv4", 00:15:20.170 "traddr": "10.0.0.2", 00:15:20.170 "trsvcid": "4420" 00:15:20.170 }, 00:15:20.170 "peer_address": { 00:15:20.170 "trtype": "TCP", 00:15:20.171 "adrfam": "IPv4", 00:15:20.171 "traddr": "10.0.0.1", 00:15:20.171 "trsvcid": "60552" 00:15:20.171 }, 00:15:20.171 "auth": { 00:15:20.171 "state": "completed", 00:15:20.171 "digest": "sha384", 00:15:20.171 "dhgroup": "ffdhe3072" 00:15:20.171 } 00:15:20.171 } 00:15:20.171 ]' 00:15:20.171 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.171 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.171 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.171 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:20.171 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:20.171 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.171 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.171 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.428 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NWUwZDZmYzkyNTBkNmRiZTY3Y2ZjMDIwODU3NGI5OWZTxq7R: --dhchap-ctrl-secret DHHC-1:02:YmVhNjhkNTVjNGY1YmUwZDc4ZTk1OTIzZWIwMzE1ZDU4M2ViMmRlNTJkZjk4ZGRl5NAoWA==: 00:15:21.361 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.361 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:21.361 20:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.361 20:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.361 
20:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.361 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:21.362 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:21.362 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:21.619 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:21.619 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:21.619 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:21.619 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:21.619 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:21.619 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.619 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.619 20:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.619 20:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.619 20:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.619 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.619 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.183 00:15:22.183 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:22.183 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:22.183 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.442 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.442 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.442 20:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.442 20:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.442 20:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.442 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.442 { 00:15:22.442 "cntlid": 69, 00:15:22.442 "qid": 0, 00:15:22.442 "state": "enabled", 00:15:22.442 "listen_address": { 
00:15:22.442 "trtype": "TCP", 00:15:22.442 "adrfam": "IPv4", 00:15:22.442 "traddr": "10.0.0.2", 00:15:22.442 "trsvcid": "4420" 00:15:22.442 }, 00:15:22.442 "peer_address": { 00:15:22.442 "trtype": "TCP", 00:15:22.442 "adrfam": "IPv4", 00:15:22.442 "traddr": "10.0.0.1", 00:15:22.442 "trsvcid": "60588" 00:15:22.442 }, 00:15:22.442 "auth": { 00:15:22.442 "state": "completed", 00:15:22.442 "digest": "sha384", 00:15:22.442 "dhgroup": "ffdhe3072" 00:15:22.442 } 00:15:22.442 } 00:15:22.442 ]' 00:15:22.442 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.442 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:22.442 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:22.442 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:22.442 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:22.442 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.442 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.442 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.699 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDBhZmIxZmNmZmUyMmNkYmRiOGNkMzkxZTIyNmM0ZGI2NDcxMzAzOTc3ZjBjZTA3+ZhsHw==: --dhchap-ctrl-secret DHHC-1:01:YzhlMGQzZjE2ODUxZTI5MDU0MjM5MTIyYWQ0YTExOThWm3ye: 00:15:23.632 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.632 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:23.632 20:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.632 20:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.632 20:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.632 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:23.632 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:23.632 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:23.891 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:23.891 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:23.891 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:23.891 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:23.891 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:23.891 
20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.891 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:23.891 20:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.891 20:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.891 20:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.891 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:23.891 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:24.458 00:15:24.458 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:24.458 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:24.458 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.458 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.458 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.458 20:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.458 20:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.716 20:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.716 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:24.716 { 00:15:24.716 "cntlid": 71, 00:15:24.716 "qid": 0, 00:15:24.716 "state": "enabled", 00:15:24.716 "listen_address": { 00:15:24.716 "trtype": "TCP", 00:15:24.716 "adrfam": "IPv4", 00:15:24.716 "traddr": "10.0.0.2", 00:15:24.716 "trsvcid": "4420" 00:15:24.716 }, 00:15:24.716 "peer_address": { 00:15:24.716 "trtype": "TCP", 00:15:24.716 "adrfam": "IPv4", 00:15:24.716 "traddr": "10.0.0.1", 00:15:24.716 "trsvcid": "60616" 00:15:24.716 }, 00:15:24.716 "auth": { 00:15:24.716 "state": "completed", 00:15:24.716 "digest": "sha384", 00:15:24.716 "dhgroup": "ffdhe3072" 00:15:24.716 } 00:15:24.716 } 00:15:24.716 ]' 00:15:24.716 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:24.716 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.716 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:24.716 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:24.716 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:24.716 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.716 20:14:11 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.716 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.974 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDljMWNhMjYyYzE0YTYxNWNkZWIwYTMxNDYzNWYzNTBlNjMxMTg4MjU4Mjg4ODNhZDgxMGFjMGVjNzg5NDUzYn29EOQ=: 00:15:26.027 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.027 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:26.027 20:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.027 20:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.027 20:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.027 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:26.027 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:26.027 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:26.027 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:26.298 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:26.298 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:26.298 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:26.298 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:26.298 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:26.298 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.298 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.298 20:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.298 20:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.298 20:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.298 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.298 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.577 00:15:26.577 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.577 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.577 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.853 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.853 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.853 20:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.853 20:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.853 20:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.853 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.853 { 00:15:26.853 "cntlid": 73, 00:15:26.853 "qid": 0, 00:15:26.853 "state": "enabled", 00:15:26.853 "listen_address": { 00:15:26.853 "trtype": "TCP", 00:15:26.853 "adrfam": "IPv4", 00:15:26.853 "traddr": "10.0.0.2", 00:15:26.853 "trsvcid": "4420" 00:15:26.853 }, 00:15:26.853 "peer_address": { 00:15:26.853 "trtype": "TCP", 00:15:26.853 "adrfam": "IPv4", 00:15:26.853 "traddr": "10.0.0.1", 00:15:26.853 "trsvcid": "60644" 00:15:26.853 }, 00:15:26.853 "auth": { 00:15:26.853 "state": "completed", 00:15:26.853 "digest": "sha384", 00:15:26.853 "dhgroup": "ffdhe4096" 00:15:26.853 } 00:15:26.853 } 00:15:26.853 ]' 00:15:26.853 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.853 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.853 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.853 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:26.853 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:27.130 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.130 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.130 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.131 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjQzNDI1NTE5Zjk2ZjEyZjJmZWI2Y2Y3OTYyZmExYjBmNWJlYzM4NDY0NTEzZjEzu0hpxg==: --dhchap-ctrl-secret DHHC-1:03:NDBhMTg4OTNlZTU1NWQyOTk5ZDg5MWU4ZTJiN2EwZDg3M2FhNDM3M2IxNDNlNTYxNjMwN2ZlOWExMjQ5MDYyNC750lM=: 00:15:28.121 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.121 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:28.121 20:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.121 20:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.121 20:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.121 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:28.121 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:28.121 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:28.379 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:28.379 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.379 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:28.379 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:28.379 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:28.379 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.379 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.379 20:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.379 20:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.379 20:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.379 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.379 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.946 00:15:28.946 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.946 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.946 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.203 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.203 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.203 20:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.203 20:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:15:29.203 20:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.203 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:29.203 { 00:15:29.203 "cntlid": 75, 00:15:29.203 "qid": 0, 00:15:29.203 "state": "enabled", 00:15:29.203 "listen_address": { 00:15:29.203 "trtype": "TCP", 00:15:29.203 "adrfam": "IPv4", 00:15:29.203 "traddr": "10.0.0.2", 00:15:29.203 "trsvcid": "4420" 00:15:29.203 }, 00:15:29.203 "peer_address": { 00:15:29.203 "trtype": "TCP", 00:15:29.203 "adrfam": "IPv4", 00:15:29.203 "traddr": "10.0.0.1", 00:15:29.203 "trsvcid": "46438" 00:15:29.203 }, 00:15:29.203 "auth": { 00:15:29.203 "state": "completed", 00:15:29.203 "digest": "sha384", 00:15:29.203 "dhgroup": "ffdhe4096" 00:15:29.203 } 00:15:29.203 } 00:15:29.203 ]' 00:15:29.203 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:29.203 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.203 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:29.203 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:29.203 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:29.203 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.203 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.203 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.461 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NWUwZDZmYzkyNTBkNmRiZTY3Y2ZjMDIwODU3NGI5OWZTxq7R: --dhchap-ctrl-secret DHHC-1:02:YmVhNjhkNTVjNGY1YmUwZDc4ZTk1OTIzZWIwMzE1ZDU4M2ViMmRlNTJkZjk4ZGRl5NAoWA==: 00:15:30.395 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.396 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:30.396 20:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.396 20:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.396 20:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.396 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:30.396 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:30.396 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:30.653 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:30.653 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:15:30.653 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:30.653 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:30.653 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:30.653 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.653 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.653 20:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.653 20:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.653 20:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.653 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.653 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.229 00:15:31.229 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:31.229 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:31.229 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.487 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.487 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.487 20:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.487 20:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.487 20:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.487 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:31.487 { 00:15:31.487 "cntlid": 77, 00:15:31.487 "qid": 0, 00:15:31.487 "state": "enabled", 00:15:31.487 "listen_address": { 00:15:31.487 "trtype": "TCP", 00:15:31.487 "adrfam": "IPv4", 00:15:31.487 "traddr": "10.0.0.2", 00:15:31.487 "trsvcid": "4420" 00:15:31.487 }, 00:15:31.487 "peer_address": { 00:15:31.487 "trtype": "TCP", 00:15:31.487 "adrfam": "IPv4", 00:15:31.487 "traddr": "10.0.0.1", 00:15:31.487 "trsvcid": "46470" 00:15:31.487 }, 00:15:31.487 "auth": { 00:15:31.487 "state": "completed", 00:15:31.487 "digest": "sha384", 00:15:31.487 "dhgroup": "ffdhe4096" 00:15:31.487 } 00:15:31.487 } 00:15:31.487 ]' 00:15:31.487 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:31.487 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.487 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:15:31.487 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:31.487 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:31.487 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.487 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.487 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.746 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDBhZmIxZmNmZmUyMmNkYmRiOGNkMzkxZTIyNmM0ZGI2NDcxMzAzOTc3ZjBjZTA3+ZhsHw==: --dhchap-ctrl-secret DHHC-1:01:YzhlMGQzZjE2ODUxZTI5MDU0MjM5MTIyYWQ0YTExOThWm3ye: 00:15:32.679 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.679 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:32.679 20:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.679 20:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.679 20:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.679 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.679 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:32.679 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:32.937 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:32.937 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.937 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:32.937 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:32.937 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:32.937 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.937 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:32.937 20:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.937 20:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.937 20:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.937 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:32.937 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:33.503 00:15:33.503 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:33.503 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:33.503 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.761 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.761 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.761 20:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.761 20:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.761 20:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.761 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.761 { 00:15:33.761 "cntlid": 79, 00:15:33.761 "qid": 0, 00:15:33.761 "state": "enabled", 00:15:33.761 "listen_address": { 00:15:33.761 "trtype": "TCP", 00:15:33.761 "adrfam": "IPv4", 00:15:33.761 "traddr": "10.0.0.2", 00:15:33.761 "trsvcid": "4420" 00:15:33.761 }, 00:15:33.761 "peer_address": { 00:15:33.761 "trtype": "TCP", 00:15:33.761 "adrfam": "IPv4", 00:15:33.761 "traddr": "10.0.0.1", 00:15:33.761 "trsvcid": "46508" 00:15:33.761 }, 00:15:33.761 "auth": { 00:15:33.761 "state": "completed", 00:15:33.761 "digest": "sha384", 00:15:33.761 "dhgroup": "ffdhe4096" 00:15:33.761 } 00:15:33.761 } 00:15:33.761 ]' 00:15:33.761 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.761 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.761 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.761 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:33.761 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.761 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.761 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.761 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.019 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDljMWNhMjYyYzE0YTYxNWNkZWIwYTMxNDYzNWYzNTBlNjMxMTg4MjU4Mjg4ODNhZDgxMGFjMGVjNzg5NDUzYn29EOQ=: 00:15:34.953 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.953 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.953 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:34.953 20:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.953 20:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.953 20:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.953 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:34.953 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.953 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.953 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:35.211 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:35.211 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:35.211 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:35.211 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:35.211 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:35.211 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.211 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.211 20:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.211 20:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.211 20:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.211 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.211 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.775 00:15:35.775 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.775 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.775 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.034 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.034 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.034 20:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.034 20:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.034 20:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.034 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:36.034 { 00:15:36.034 "cntlid": 81, 00:15:36.034 "qid": 0, 00:15:36.034 "state": "enabled", 00:15:36.034 "listen_address": { 00:15:36.034 "trtype": "TCP", 00:15:36.034 "adrfam": "IPv4", 00:15:36.034 "traddr": "10.0.0.2", 00:15:36.034 "trsvcid": "4420" 00:15:36.034 }, 00:15:36.034 "peer_address": { 00:15:36.034 "trtype": "TCP", 00:15:36.034 "adrfam": "IPv4", 00:15:36.034 "traddr": "10.0.0.1", 00:15:36.034 "trsvcid": "46544" 00:15:36.034 }, 00:15:36.034 "auth": { 00:15:36.034 "state": "completed", 00:15:36.034 "digest": "sha384", 00:15:36.034 "dhgroup": "ffdhe6144" 00:15:36.034 } 00:15:36.034 } 00:15:36.034 ]' 00:15:36.034 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:36.034 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.034 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:36.292 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:36.292 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:36.292 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.292 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.292 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.550 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjQzNDI1NTE5Zjk2ZjEyZjJmZWI2Y2Y3OTYyZmExYjBmNWJlYzM4NDY0NTEzZjEzu0hpxg==: --dhchap-ctrl-secret DHHC-1:03:NDBhMTg4OTNlZTU1NWQyOTk5ZDg5MWU4ZTJiN2EwZDg3M2FhNDM3M2IxNDNlNTYxNjMwN2ZlOWExMjQ5MDYyNC750lM=: 00:15:37.485 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.485 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:37.485 20:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.485 20:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.485 20:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.485 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:37.485 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:37.485 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:37.744 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:37.744 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.744 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:37.744 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:37.744 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:37.744 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.744 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.744 20:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.744 20:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.744 20:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.744 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.744 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.310 00:15:38.310 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:38.310 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:38.310 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.568 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.568 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.568 20:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.568 20:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.569 20:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.569 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:38.569 { 00:15:38.569 "cntlid": 83, 00:15:38.569 "qid": 0, 00:15:38.569 "state": "enabled", 00:15:38.569 "listen_address": { 00:15:38.569 "trtype": "TCP", 00:15:38.569 "adrfam": "IPv4", 00:15:38.569 "traddr": "10.0.0.2", 00:15:38.569 "trsvcid": "4420" 00:15:38.569 }, 00:15:38.569 "peer_address": { 00:15:38.569 "trtype": "TCP", 00:15:38.569 "adrfam": "IPv4", 00:15:38.569 "traddr": "10.0.0.1", 00:15:38.569 "trsvcid": "38280" 00:15:38.569 }, 00:15:38.569 "auth": { 00:15:38.569 "state": "completed", 00:15:38.569 "digest": "sha384", 00:15:38.569 
"dhgroup": "ffdhe6144" 00:15:38.569 } 00:15:38.569 } 00:15:38.569 ]' 00:15:38.569 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:38.569 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.569 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:38.569 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:38.569 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:38.569 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.569 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.569 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.827 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NWUwZDZmYzkyNTBkNmRiZTY3Y2ZjMDIwODU3NGI5OWZTxq7R: --dhchap-ctrl-secret DHHC-1:02:YmVhNjhkNTVjNGY1YmUwZDc4ZTk1OTIzZWIwMzE1ZDU4M2ViMmRlNTJkZjk4ZGRl5NAoWA==: 00:15:39.761 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.761 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:39.761 20:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.761 20:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.761 20:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.761 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:39.761 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:39.761 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:40.020 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:40.020 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.020 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:40.020 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:40.020 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:40.020 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.020 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.020 20:14:27 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.020 20:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.020 20:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.020 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.020 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.587 00:15:40.587 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:40.587 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:40.587 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.845 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.845 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.845 20:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.845 20:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.845 20:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.845 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:40.845 { 00:15:40.845 "cntlid": 85, 00:15:40.845 "qid": 0, 00:15:40.845 "state": "enabled", 00:15:40.845 "listen_address": { 00:15:40.845 "trtype": "TCP", 00:15:40.845 "adrfam": "IPv4", 00:15:40.845 "traddr": "10.0.0.2", 00:15:40.845 "trsvcid": "4420" 00:15:40.845 }, 00:15:40.845 "peer_address": { 00:15:40.845 "trtype": "TCP", 00:15:40.845 "adrfam": "IPv4", 00:15:40.845 "traddr": "10.0.0.1", 00:15:40.845 "trsvcid": "38308" 00:15:40.845 }, 00:15:40.845 "auth": { 00:15:40.845 "state": "completed", 00:15:40.845 "digest": "sha384", 00:15:40.845 "dhgroup": "ffdhe6144" 00:15:40.845 } 00:15:40.845 } 00:15:40.845 ]' 00:15:40.845 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:40.845 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.845 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:40.845 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:40.845 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:40.845 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.845 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.845 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.103 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDBhZmIxZmNmZmUyMmNkYmRiOGNkMzkxZTIyNmM0ZGI2NDcxMzAzOTc3ZjBjZTA3+ZhsHw==: --dhchap-ctrl-secret DHHC-1:01:YzhlMGQzZjE2ODUxZTI5MDU0MjM5MTIyYWQ0YTExOThWm3ye: 00:15:42.036 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.036 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:42.036 20:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.036 20:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.036 20:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.036 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.036 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:42.036 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:42.294 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:42.294 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:42.294 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:42.294 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:42.294 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:42.294 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.294 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:42.294 20:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.294 20:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.294 20:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.294 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:42.294 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:42.860 00:15:42.860 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:42.860 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.860 20:14:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.118 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.118 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.118 20:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.118 20:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.118 20:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.118 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.118 { 00:15:43.118 "cntlid": 87, 00:15:43.118 "qid": 0, 00:15:43.118 "state": "enabled", 00:15:43.118 "listen_address": { 00:15:43.118 "trtype": "TCP", 00:15:43.118 "adrfam": "IPv4", 00:15:43.118 "traddr": "10.0.0.2", 00:15:43.118 "trsvcid": "4420" 00:15:43.118 }, 00:15:43.118 "peer_address": { 00:15:43.118 "trtype": "TCP", 00:15:43.118 "adrfam": "IPv4", 00:15:43.118 "traddr": "10.0.0.1", 00:15:43.118 "trsvcid": "38338" 00:15:43.118 }, 00:15:43.118 "auth": { 00:15:43.118 "state": "completed", 00:15:43.118 "digest": "sha384", 00:15:43.118 "dhgroup": "ffdhe6144" 00:15:43.118 } 00:15:43.118 } 00:15:43.118 ]' 00:15:43.118 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.376 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.376 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.376 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:43.376 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:43.376 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.376 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.376 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.635 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDljMWNhMjYyYzE0YTYxNWNkZWIwYTMxNDYzNWYzNTBlNjMxMTg4MjU4Mjg4ODNhZDgxMGFjMGVjNzg5NDUzYn29EOQ=: 00:15:44.569 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.569 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:44.569 20:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.569 20:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.569 20:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.569 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:44.569 20:14:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:44.569 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:44.569 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:44.828 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:44.828 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:44.828 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:44.828 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:44.828 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:44.828 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.828 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.828 20:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.828 20:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.828 20:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.828 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.828 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.761 00:15:45.761 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.761 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.761 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.019 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.019 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.019 20:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.019 20:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.019 20:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.019 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.019 { 00:15:46.019 "cntlid": 89, 00:15:46.019 "qid": 0, 00:15:46.019 "state": "enabled", 00:15:46.019 "listen_address": { 00:15:46.019 "trtype": "TCP", 00:15:46.019 "adrfam": "IPv4", 00:15:46.019 "traddr": "10.0.0.2", 00:15:46.019 
"trsvcid": "4420" 00:15:46.019 }, 00:15:46.019 "peer_address": { 00:15:46.019 "trtype": "TCP", 00:15:46.019 "adrfam": "IPv4", 00:15:46.019 "traddr": "10.0.0.1", 00:15:46.019 "trsvcid": "38374" 00:15:46.019 }, 00:15:46.019 "auth": { 00:15:46.019 "state": "completed", 00:15:46.019 "digest": "sha384", 00:15:46.019 "dhgroup": "ffdhe8192" 00:15:46.019 } 00:15:46.019 } 00:15:46.019 ]' 00:15:46.019 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.019 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.019 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.019 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:46.019 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.019 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.019 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.019 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.278 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjQzNDI1NTE5Zjk2ZjEyZjJmZWI2Y2Y3OTYyZmExYjBmNWJlYzM4NDY0NTEzZjEzu0hpxg==: --dhchap-ctrl-secret DHHC-1:03:NDBhMTg4OTNlZTU1NWQyOTk5ZDg5MWU4ZTJiN2EwZDg3M2FhNDM3M2IxNDNlNTYxNjMwN2ZlOWExMjQ5MDYyNC750lM=: 00:15:47.208 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.208 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:47.208 20:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.208 20:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.208 20:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.208 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.208 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:47.208 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:47.466 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:47.466 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.466 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:47.466 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:47.466 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:47.466 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.466 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.466 20:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.466 20:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.466 20:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.466 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.466 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.398 00:15:48.398 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.398 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.398 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.656 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.656 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.656 20:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.656 20:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.656 20:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.656 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:48.656 { 00:15:48.656 "cntlid": 91, 00:15:48.656 "qid": 0, 00:15:48.656 "state": "enabled", 00:15:48.656 "listen_address": { 00:15:48.656 "trtype": "TCP", 00:15:48.656 "adrfam": "IPv4", 00:15:48.656 "traddr": "10.0.0.2", 00:15:48.656 "trsvcid": "4420" 00:15:48.656 }, 00:15:48.656 "peer_address": { 00:15:48.656 "trtype": "TCP", 00:15:48.656 "adrfam": "IPv4", 00:15:48.656 "traddr": "10.0.0.1", 00:15:48.656 "trsvcid": "57166" 00:15:48.656 }, 00:15:48.656 "auth": { 00:15:48.656 "state": "completed", 00:15:48.656 "digest": "sha384", 00:15:48.656 "dhgroup": "ffdhe8192" 00:15:48.656 } 00:15:48.656 } 00:15:48.656 ]' 00:15:48.656 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.656 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.656 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.656 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:48.656 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.656 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.656 20:14:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.656 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.914 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NWUwZDZmYzkyNTBkNmRiZTY3Y2ZjMDIwODU3NGI5OWZTxq7R: --dhchap-ctrl-secret DHHC-1:02:YmVhNjhkNTVjNGY1YmUwZDc4ZTk1OTIzZWIwMzE1ZDU4M2ViMmRlNTJkZjk4ZGRl5NAoWA==: 00:15:49.850 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.850 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:49.850 20:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.850 20:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.850 20:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.850 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.850 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:49.850 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:50.107 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:50.107 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.107 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:50.107 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:50.107 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:50.107 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.107 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.107 20:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.107 20:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.107 20:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.107 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.107 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.037 00:15:51.037 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.037 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.037 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.295 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.295 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.295 20:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.295 20:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.295 20:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.295 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.295 { 00:15:51.295 "cntlid": 93, 00:15:51.295 "qid": 0, 00:15:51.295 "state": "enabled", 00:15:51.295 "listen_address": { 00:15:51.295 "trtype": "TCP", 00:15:51.295 "adrfam": "IPv4", 00:15:51.295 "traddr": "10.0.0.2", 00:15:51.295 "trsvcid": "4420" 00:15:51.295 }, 00:15:51.295 "peer_address": { 00:15:51.295 "trtype": "TCP", 00:15:51.295 "adrfam": "IPv4", 00:15:51.295 "traddr": "10.0.0.1", 00:15:51.295 "trsvcid": "57200" 00:15:51.295 }, 00:15:51.295 "auth": { 00:15:51.295 "state": "completed", 00:15:51.295 "digest": "sha384", 00:15:51.295 "dhgroup": "ffdhe8192" 00:15:51.295 } 00:15:51.295 } 00:15:51.295 ]' 00:15:51.295 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.295 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.295 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.295 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:51.295 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.552 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.552 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.552 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.810 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDBhZmIxZmNmZmUyMmNkYmRiOGNkMzkxZTIyNmM0ZGI2NDcxMzAzOTc3ZjBjZTA3+ZhsHw==: --dhchap-ctrl-secret DHHC-1:01:YzhlMGQzZjE2ODUxZTI5MDU0MjM5MTIyYWQ0YTExOThWm3ye: 00:15:52.741 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.741 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:52.741 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.741 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.741 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.741 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.741 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:52.741 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:52.999 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:52.999 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.999 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:52.999 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:52.999 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:52.999 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.999 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:52.999 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.999 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.999 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.999 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:52.999 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:53.932 00:15:53.932 20:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.932 20:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.932 20:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.932 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.932 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.932 20:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.932 20:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.190 20:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.190 20:14:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:54.190 { 00:15:54.190 "cntlid": 95, 00:15:54.190 "qid": 0, 00:15:54.190 "state": "enabled", 00:15:54.190 "listen_address": { 00:15:54.190 "trtype": "TCP", 00:15:54.190 "adrfam": "IPv4", 00:15:54.190 "traddr": "10.0.0.2", 00:15:54.190 "trsvcid": "4420" 00:15:54.190 }, 00:15:54.190 "peer_address": { 00:15:54.190 "trtype": "TCP", 00:15:54.190 "adrfam": "IPv4", 00:15:54.190 "traddr": "10.0.0.1", 00:15:54.190 "trsvcid": "57230" 00:15:54.190 }, 00:15:54.190 "auth": { 00:15:54.190 "state": "completed", 00:15:54.190 "digest": "sha384", 00:15:54.190 "dhgroup": "ffdhe8192" 00:15:54.190 } 00:15:54.190 } 00:15:54.190 ]' 00:15:54.190 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.190 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.190 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.190 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:54.190 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.190 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.190 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.190 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.448 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDljMWNhMjYyYzE0YTYxNWNkZWIwYTMxNDYzNWYzNTBlNjMxMTg4MjU4Mjg4ODNhZDgxMGFjMGVjNzg5NDUzYn29EOQ=: 00:15:55.384 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.384 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:55.384 20:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.384 20:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.384 20:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.384 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:55.384 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:55.384 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.384 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:55.384 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:55.641 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:55.641 20:14:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.641 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:55.641 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:55.641 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:55.641 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.641 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.641 20:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.641 20:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.641 20:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.641 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.641 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.899 00:15:55.899 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.899 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.899 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.157 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.157 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.157 20:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.157 20:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.157 20:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.157 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.157 { 00:15:56.157 "cntlid": 97, 00:15:56.157 "qid": 0, 00:15:56.157 "state": "enabled", 00:15:56.157 "listen_address": { 00:15:56.157 "trtype": "TCP", 00:15:56.157 "adrfam": "IPv4", 00:15:56.157 "traddr": "10.0.0.2", 00:15:56.157 "trsvcid": "4420" 00:15:56.157 }, 00:15:56.157 "peer_address": { 00:15:56.157 "trtype": "TCP", 00:15:56.157 "adrfam": "IPv4", 00:15:56.157 "traddr": "10.0.0.1", 00:15:56.157 "trsvcid": "57260" 00:15:56.157 }, 00:15:56.157 "auth": { 00:15:56.157 "state": "completed", 00:15:56.157 "digest": "sha512", 00:15:56.157 "dhgroup": "null" 00:15:56.157 } 00:15:56.157 } 00:15:56.157 ]' 00:15:56.157 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.157 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:56.157 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:15:56.415 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:56.415 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.415 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.415 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.415 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.672 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjQzNDI1NTE5Zjk2ZjEyZjJmZWI2Y2Y3OTYyZmExYjBmNWJlYzM4NDY0NTEzZjEzu0hpxg==: --dhchap-ctrl-secret DHHC-1:03:NDBhMTg4OTNlZTU1NWQyOTk5ZDg5MWU4ZTJiN2EwZDg3M2FhNDM3M2IxNDNlNTYxNjMwN2ZlOWExMjQ5MDYyNC750lM=: 00:15:57.606 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.606 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:57.606 20:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.606 20:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.606 20:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.606 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.606 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:57.606 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:57.865 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:57.865 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.865 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:57.865 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:57.865 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:57.865 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.865 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.865 20:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.865 20:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.865 20:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.865 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.865 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.124 00:15:58.124 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.124 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.124 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.382 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.382 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.382 20:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.382 20:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.382 20:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.382 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.382 { 00:15:58.382 "cntlid": 99, 00:15:58.382 "qid": 0, 00:15:58.382 "state": "enabled", 00:15:58.382 "listen_address": { 00:15:58.382 "trtype": "TCP", 00:15:58.382 "adrfam": "IPv4", 00:15:58.382 "traddr": "10.0.0.2", 00:15:58.382 "trsvcid": "4420" 00:15:58.382 }, 00:15:58.382 "peer_address": { 00:15:58.382 "trtype": "TCP", 00:15:58.382 "adrfam": "IPv4", 00:15:58.382 "traddr": "10.0.0.1", 00:15:58.382 "trsvcid": "37910" 00:15:58.382 }, 00:15:58.382 "auth": { 00:15:58.382 "state": "completed", 00:15:58.382 "digest": "sha512", 00:15:58.382 "dhgroup": "null" 00:15:58.382 } 00:15:58.382 } 00:15:58.382 ]' 00:15:58.382 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.382 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.382 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.382 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:58.382 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.640 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.640 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.640 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.640 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NWUwZDZmYzkyNTBkNmRiZTY3Y2ZjMDIwODU3NGI5OWZTxq7R: --dhchap-ctrl-secret DHHC-1:02:YmVhNjhkNTVjNGY1YmUwZDc4ZTk1OTIzZWIwMzE1ZDU4M2ViMmRlNTJkZjk4ZGRl5NAoWA==: 
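For reference, the cycle the trace above keeps repeating can be condensed into the following sketch of a single connect_authenticate iteration. It is reconstructed from the trace, not taken from the test script itself: the host RPC socket, addresses, NQNs and key names come from the commands logged above, the target-side calls are shown as plain rpc.py against the target's default socket (the trace goes through the autotest rpc_cmd wrapper instead), and the DHHC-1 strings are placeholders rather than the real secrets.

    # One iteration of the auth cycle (sketch; values match the sha512/null round above).
    digest=sha512 dhgroup=null keyid=1
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    subnqn=nqn.2024-03.io.spdk:cnode0
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Host-side bdev_nvme: restrict the initiator to the digest/dhgroup under test.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Target side: allow the host NQN with the pre-loaded DH-HMAC-CHAP key names.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Host side: attach a controller and confirm it shows up.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0

    # Target side: the qpair must report the negotiated digest/dhgroup and
    # auth.state == "completed".
    $rpc nvmf_subsystem_get_qpairs "$subnqn" \
        | jq -r '.[0].auth | .digest, .dhgroup, .state'

    # Tear down the bdev_nvme path, then repeat the handshake with the kernel
    # initiator (placeholder DHHC-1 secrets stand in for the logged values).
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
        --dhchap-secret 'DHHC-1:01:<host key>' --dhchap-ctrl-secret 'DHHC-1:02:<ctrlr key>'
    nvme disconnect -n "$subnqn"

    # Drop the host entry before the next (digest, dhgroup, keyid) combination.
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The trace continues below with the remaining key IDs for sha512/null and then moves on to the ffdhe2048 group.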
00:15:59.574 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.574 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:59.574 20:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.574 20:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.574 20:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.574 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.574 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:59.574 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:00.139 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:16:00.139 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.139 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:00.139 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:00.139 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:00.139 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.139 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.139 20:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.139 20:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.139 20:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.139 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.139 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.397 00:16:00.397 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.397 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.397 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.655 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.655 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.655 20:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.655 20:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.655 20:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.655 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.655 { 00:16:00.655 "cntlid": 101, 00:16:00.655 "qid": 0, 00:16:00.655 "state": "enabled", 00:16:00.655 "listen_address": { 00:16:00.655 "trtype": "TCP", 00:16:00.655 "adrfam": "IPv4", 00:16:00.655 "traddr": "10.0.0.2", 00:16:00.655 "trsvcid": "4420" 00:16:00.655 }, 00:16:00.655 "peer_address": { 00:16:00.655 "trtype": "TCP", 00:16:00.655 "adrfam": "IPv4", 00:16:00.655 "traddr": "10.0.0.1", 00:16:00.655 "trsvcid": "37942" 00:16:00.655 }, 00:16:00.655 "auth": { 00:16:00.655 "state": "completed", 00:16:00.655 "digest": "sha512", 00:16:00.655 "dhgroup": "null" 00:16:00.655 } 00:16:00.655 } 00:16:00.655 ]' 00:16:00.655 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.655 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.655 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.655 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:00.655 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.655 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.655 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.655 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.913 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDBhZmIxZmNmZmUyMmNkYmRiOGNkMzkxZTIyNmM0ZGI2NDcxMzAzOTc3ZjBjZTA3+ZhsHw==: --dhchap-ctrl-secret DHHC-1:01:YzhlMGQzZjE2ODUxZTI5MDU0MjM5MTIyYWQ0YTExOThWm3ye: 00:16:01.846 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.846 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:01.846 20:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.846 20:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.846 20:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.846 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.846 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:01.846 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:16:02.104 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:16:02.104 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.104 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:02.104 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:02.104 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:02.104 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.104 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:02.104 20:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.104 20:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.104 20:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.104 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:02.104 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:02.669 00:16:02.669 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.669 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.669 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.669 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.669 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.669 20:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.669 20:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.669 20:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.669 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.669 { 00:16:02.669 "cntlid": 103, 00:16:02.669 "qid": 0, 00:16:02.669 "state": "enabled", 00:16:02.669 "listen_address": { 00:16:02.669 "trtype": "TCP", 00:16:02.669 "adrfam": "IPv4", 00:16:02.669 "traddr": "10.0.0.2", 00:16:02.669 "trsvcid": "4420" 00:16:02.669 }, 00:16:02.669 "peer_address": { 00:16:02.669 "trtype": "TCP", 00:16:02.669 "adrfam": "IPv4", 00:16:02.669 "traddr": "10.0.0.1", 00:16:02.669 "trsvcid": "37964" 00:16:02.669 }, 00:16:02.669 "auth": { 00:16:02.669 "state": "completed", 00:16:02.669 "digest": "sha512", 00:16:02.669 "dhgroup": "null" 00:16:02.669 } 00:16:02.669 } 00:16:02.669 ]' 00:16:02.669 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.927 20:14:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:02.927 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.927 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:02.927 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.927 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.927 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.927 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.185 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDljMWNhMjYyYzE0YTYxNWNkZWIwYTMxNDYzNWYzNTBlNjMxMTg4MjU4Mjg4ODNhZDgxMGFjMGVjNzg5NDUzYn29EOQ=: 00:16:04.116 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.116 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:04.116 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.116 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.116 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.116 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.116 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.116 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:04.116 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:04.373 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:16:04.373 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.373 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:04.373 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:04.373 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:04.373 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.373 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.373 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.373 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.373 20:14:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.373 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.373 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.630 00:16:04.630 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.630 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.630 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.888 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.888 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.888 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.888 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.888 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.888 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.888 { 00:16:04.888 "cntlid": 105, 00:16:04.888 "qid": 0, 00:16:04.888 "state": "enabled", 00:16:04.888 "listen_address": { 00:16:04.888 "trtype": "TCP", 00:16:04.888 "adrfam": "IPv4", 00:16:04.888 "traddr": "10.0.0.2", 00:16:04.888 "trsvcid": "4420" 00:16:04.888 }, 00:16:04.888 "peer_address": { 00:16:04.888 "trtype": "TCP", 00:16:04.888 "adrfam": "IPv4", 00:16:04.888 "traddr": "10.0.0.1", 00:16:04.888 "trsvcid": "37982" 00:16:04.888 }, 00:16:04.888 "auth": { 00:16:04.888 "state": "completed", 00:16:04.888 "digest": "sha512", 00:16:04.888 "dhgroup": "ffdhe2048" 00:16:04.888 } 00:16:04.888 } 00:16:04.888 ]' 00:16:04.888 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.145 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.145 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.145 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:05.145 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.145 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.145 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.145 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.402 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjQzNDI1NTE5Zjk2ZjEyZjJmZWI2Y2Y3OTYyZmExYjBmNWJlYzM4NDY0NTEzZjEzu0hpxg==: --dhchap-ctrl-secret DHHC-1:03:NDBhMTg4OTNlZTU1NWQyOTk5ZDg5MWU4ZTJiN2EwZDg3M2FhNDM3M2IxNDNlNTYxNjMwN2ZlOWExMjQ5MDYyNC750lM=: 00:16:06.334 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.334 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:06.334 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.334 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.334 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.334 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.334 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:06.334 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:06.591 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:16:06.591 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.592 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:06.592 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:06.592 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:06.592 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.592 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.592 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.592 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.592 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.592 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.592 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.849 00:16:06.849 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.849 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.849 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.106 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.106 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.106 20:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.106 20:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.106 20:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.106 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.106 { 00:16:07.106 "cntlid": 107, 00:16:07.106 "qid": 0, 00:16:07.106 "state": "enabled", 00:16:07.106 "listen_address": { 00:16:07.106 "trtype": "TCP", 00:16:07.106 "adrfam": "IPv4", 00:16:07.106 "traddr": "10.0.0.2", 00:16:07.106 "trsvcid": "4420" 00:16:07.106 }, 00:16:07.106 "peer_address": { 00:16:07.106 "trtype": "TCP", 00:16:07.106 "adrfam": "IPv4", 00:16:07.106 "traddr": "10.0.0.1", 00:16:07.106 "trsvcid": "38002" 00:16:07.106 }, 00:16:07.106 "auth": { 00:16:07.106 "state": "completed", 00:16:07.106 "digest": "sha512", 00:16:07.106 "dhgroup": "ffdhe2048" 00:16:07.106 } 00:16:07.106 } 00:16:07.106 ]' 00:16:07.106 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.363 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.363 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.363 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:07.363 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.363 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.363 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.363 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.620 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NWUwZDZmYzkyNTBkNmRiZTY3Y2ZjMDIwODU3NGI5OWZTxq7R: --dhchap-ctrl-secret DHHC-1:02:YmVhNjhkNTVjNGY1YmUwZDc4ZTk1OTIzZWIwMzE1ZDU4M2ViMmRlNTJkZjk4ZGRl5NAoWA==: 00:16:08.615 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.615 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:08.615 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.615 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.615 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.615 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.615 20:14:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:08.615 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:08.897 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:08.897 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.897 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.897 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:08.897 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:08.897 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.897 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.897 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.897 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.897 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.897 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.897 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.158 00:16:09.158 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.158 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.158 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.415 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.415 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.415 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.415 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.415 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.415 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.415 { 00:16:09.415 "cntlid": 109, 00:16:09.415 "qid": 0, 00:16:09.415 "state": "enabled", 00:16:09.415 "listen_address": { 00:16:09.415 "trtype": "TCP", 00:16:09.415 "adrfam": "IPv4", 00:16:09.415 "traddr": "10.0.0.2", 00:16:09.415 "trsvcid": "4420" 00:16:09.415 }, 00:16:09.415 "peer_address": { 00:16:09.415 "trtype": "TCP", 00:16:09.415 
"adrfam": "IPv4", 00:16:09.415 "traddr": "10.0.0.1", 00:16:09.415 "trsvcid": "55084" 00:16:09.415 }, 00:16:09.415 "auth": { 00:16:09.415 "state": "completed", 00:16:09.415 "digest": "sha512", 00:16:09.415 "dhgroup": "ffdhe2048" 00:16:09.415 } 00:16:09.415 } 00:16:09.415 ]' 00:16:09.415 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.415 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.415 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.415 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:09.415 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.415 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.415 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.415 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.672 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDBhZmIxZmNmZmUyMmNkYmRiOGNkMzkxZTIyNmM0ZGI2NDcxMzAzOTc3ZjBjZTA3+ZhsHw==: --dhchap-ctrl-secret DHHC-1:01:YzhlMGQzZjE2ODUxZTI5MDU0MjM5MTIyYWQ0YTExOThWm3ye: 00:16:10.604 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.604 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:10.604 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.604 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.604 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.604 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.604 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:10.604 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:10.861 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:10.861 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.861 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:10.861 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:10.861 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:10.861 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.861 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:10.862 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.862 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.862 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.862 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:10.862 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:11.119 00:16:11.119 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.119 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.119 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.376 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.376 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.376 20:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.376 20:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.376 20:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.376 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.376 { 00:16:11.376 "cntlid": 111, 00:16:11.376 "qid": 0, 00:16:11.376 "state": "enabled", 00:16:11.376 "listen_address": { 00:16:11.376 "trtype": "TCP", 00:16:11.376 "adrfam": "IPv4", 00:16:11.376 "traddr": "10.0.0.2", 00:16:11.376 "trsvcid": "4420" 00:16:11.376 }, 00:16:11.376 "peer_address": { 00:16:11.376 "trtype": "TCP", 00:16:11.376 "adrfam": "IPv4", 00:16:11.376 "traddr": "10.0.0.1", 00:16:11.376 "trsvcid": "55110" 00:16:11.376 }, 00:16:11.376 "auth": { 00:16:11.376 "state": "completed", 00:16:11.376 "digest": "sha512", 00:16:11.376 "dhgroup": "ffdhe2048" 00:16:11.376 } 00:16:11.376 } 00:16:11.376 ]' 00:16:11.376 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.376 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.376 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.633 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:11.633 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.633 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.633 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.633 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.890 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDljMWNhMjYyYzE0YTYxNWNkZWIwYTMxNDYzNWYzNTBlNjMxMTg4MjU4Mjg4ODNhZDgxMGFjMGVjNzg5NDUzYn29EOQ=: 00:16:12.823 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.823 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:12.823 20:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.823 20:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.823 20:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.823 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.823 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.823 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:12.823 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:13.080 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:13.080 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.080 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:13.080 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:13.080 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:13.080 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.080 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.080 20:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.080 20:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.080 20:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.080 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.080 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:16:13.337 00:16:13.337 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.337 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.337 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.596 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.596 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.596 20:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.596 20:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.596 20:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.596 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.596 { 00:16:13.596 "cntlid": 113, 00:16:13.596 "qid": 0, 00:16:13.596 "state": "enabled", 00:16:13.596 "listen_address": { 00:16:13.596 "trtype": "TCP", 00:16:13.596 "adrfam": "IPv4", 00:16:13.596 "traddr": "10.0.0.2", 00:16:13.596 "trsvcid": "4420" 00:16:13.596 }, 00:16:13.596 "peer_address": { 00:16:13.596 "trtype": "TCP", 00:16:13.596 "adrfam": "IPv4", 00:16:13.596 "traddr": "10.0.0.1", 00:16:13.596 "trsvcid": "55136" 00:16:13.596 }, 00:16:13.596 "auth": { 00:16:13.596 "state": "completed", 00:16:13.596 "digest": "sha512", 00:16:13.596 "dhgroup": "ffdhe3072" 00:16:13.596 } 00:16:13.596 } 00:16:13.596 ]' 00:16:13.596 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.596 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.596 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.596 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:13.596 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.853 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.853 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.853 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.110 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjQzNDI1NTE5Zjk2ZjEyZjJmZWI2Y2Y3OTYyZmExYjBmNWJlYzM4NDY0NTEzZjEzu0hpxg==: --dhchap-ctrl-secret DHHC-1:03:NDBhMTg4OTNlZTU1NWQyOTk5ZDg5MWU4ZTJiN2EwZDg3M2FhNDM3M2IxNDNlNTYxNjMwN2ZlOWExMjQ5MDYyNC750lM=: 00:16:15.044 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.044 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:15.044 20:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
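Each iteration above exercises the same per-key cycle; condensed into a plain sequence of RPC and nvme-cli calls it looks roughly like the sketch below. This is assembled from the commands visible in this log, not the verbatim target/auth.sh source: key0/ckey0 name DH-HMAC-CHAP keys registered earlier in the test run, and $SECRET/$CTRL_SECRET stand in for the DHHC-1:... strings printed in the log.

# Condensed per-key authentication cycle (sketch; values copied from the log above).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
subnqn=nqn.2024-03.io.spdk:cnode0

# Host-side bdev_nvme options: restrict the SPDK initiator to one digest/dhgroup combination.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Target side (default RPC socket): allow the host on the subsystem with the key pair under test.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller over TCP, authenticating with the same keys.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the controller exists and that the target-side qpair finished DH-HMAC-CHAP.
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'         # expect "completed"

# Tear down the SPDK initiator, then repeat the handshake with the kernel initiator.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
    --dhchap-secret "$SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
nvme disconnect -n "$subnqn"

# Remove the host again so the next key/dhgroup combination starts from a clean state.
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"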
00:16:15.044 20:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.044 20:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.044 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.044 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:15.044 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:15.301 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:15.301 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.301 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:15.301 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:15.301 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:15.301 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.301 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.301 20:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.301 20:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.301 20:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.301 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.301 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.558 00:16:15.558 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.558 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.558 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.816 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.816 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.816 20:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.816 20:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.816 20:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.816 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.816 { 00:16:15.816 
"cntlid": 115, 00:16:15.816 "qid": 0, 00:16:15.816 "state": "enabled", 00:16:15.816 "listen_address": { 00:16:15.816 "trtype": "TCP", 00:16:15.816 "adrfam": "IPv4", 00:16:15.816 "traddr": "10.0.0.2", 00:16:15.816 "trsvcid": "4420" 00:16:15.816 }, 00:16:15.816 "peer_address": { 00:16:15.816 "trtype": "TCP", 00:16:15.816 "adrfam": "IPv4", 00:16:15.816 "traddr": "10.0.0.1", 00:16:15.816 "trsvcid": "55148" 00:16:15.816 }, 00:16:15.816 "auth": { 00:16:15.816 "state": "completed", 00:16:15.816 "digest": "sha512", 00:16:15.816 "dhgroup": "ffdhe3072" 00:16:15.816 } 00:16:15.816 } 00:16:15.816 ]' 00:16:15.816 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.816 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.816 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.816 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:15.816 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.816 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.816 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.816 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.074 20:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NWUwZDZmYzkyNTBkNmRiZTY3Y2ZjMDIwODU3NGI5OWZTxq7R: --dhchap-ctrl-secret DHHC-1:02:YmVhNjhkNTVjNGY1YmUwZDc4ZTk1OTIzZWIwMzE1ZDU4M2ViMmRlNTJkZjk4ZGRl5NAoWA==: 00:16:17.006 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.006 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:17.006 20:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.006 20:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.006 20:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.006 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.006 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:17.006 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:17.603 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:17.603 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.603 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:17.603 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:16:17.603 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:17.603 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.603 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.603 20:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.603 20:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.603 20:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.603 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.603 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.861 00:16:17.861 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.861 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.861 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.119 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.119 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.119 20:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.119 20:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.119 20:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.119 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.119 { 00:16:18.119 "cntlid": 117, 00:16:18.119 "qid": 0, 00:16:18.119 "state": "enabled", 00:16:18.119 "listen_address": { 00:16:18.119 "trtype": "TCP", 00:16:18.119 "adrfam": "IPv4", 00:16:18.119 "traddr": "10.0.0.2", 00:16:18.119 "trsvcid": "4420" 00:16:18.119 }, 00:16:18.119 "peer_address": { 00:16:18.119 "trtype": "TCP", 00:16:18.119 "adrfam": "IPv4", 00:16:18.119 "traddr": "10.0.0.1", 00:16:18.119 "trsvcid": "33248" 00:16:18.119 }, 00:16:18.119 "auth": { 00:16:18.119 "state": "completed", 00:16:18.119 "digest": "sha512", 00:16:18.119 "dhgroup": "ffdhe3072" 00:16:18.119 } 00:16:18.119 } 00:16:18.119 ]' 00:16:18.119 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.119 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.119 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.119 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:18.119 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:16:18.119 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.119 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.119 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.376 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDBhZmIxZmNmZmUyMmNkYmRiOGNkMzkxZTIyNmM0ZGI2NDcxMzAzOTc3ZjBjZTA3+ZhsHw==: --dhchap-ctrl-secret DHHC-1:01:YzhlMGQzZjE2ODUxZTI5MDU0MjM5MTIyYWQ0YTExOThWm3ye: 00:16:19.306 20:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.306 20:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:19.306 20:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.306 20:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.306 20:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.306 20:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.306 20:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:19.306 20:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:19.564 20:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:19.565 20:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.565 20:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:19.565 20:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:19.565 20:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:19.565 20:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.565 20:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:19.565 20:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.565 20:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.565 20:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.565 20:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:19.565 20:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:20.130 00:16:20.130 20:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.130 20:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.130 20:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.130 20:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.130 20:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.130 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.130 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.130 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.130 20:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.130 { 00:16:20.130 "cntlid": 119, 00:16:20.130 "qid": 0, 00:16:20.130 "state": "enabled", 00:16:20.130 "listen_address": { 00:16:20.130 "trtype": "TCP", 00:16:20.130 "adrfam": "IPv4", 00:16:20.130 "traddr": "10.0.0.2", 00:16:20.130 "trsvcid": "4420" 00:16:20.130 }, 00:16:20.130 "peer_address": { 00:16:20.130 "trtype": "TCP", 00:16:20.130 "adrfam": "IPv4", 00:16:20.130 "traddr": "10.0.0.1", 00:16:20.130 "trsvcid": "33270" 00:16:20.130 }, 00:16:20.130 "auth": { 00:16:20.130 "state": "completed", 00:16:20.130 "digest": "sha512", 00:16:20.130 "dhgroup": "ffdhe3072" 00:16:20.130 } 00:16:20.130 } 00:16:20.130 ]' 00:16:20.130 20:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.386 20:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.386 20:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.386 20:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:20.386 20:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.386 20:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.386 20:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.386 20:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.644 20:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDljMWNhMjYyYzE0YTYxNWNkZWIwYTMxNDYzNWYzNTBlNjMxMTg4MjU4Mjg4ODNhZDgxMGFjMGVjNzg5NDUzYn29EOQ=: 00:16:21.576 20:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.576 20:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:21.576 20:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.576 20:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.576 20:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.576 20:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.576 20:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.576 20:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:21.576 20:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:21.834 20:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:21.834 20:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.834 20:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:21.834 20:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:21.834 20:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:21.834 20:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.834 20:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.834 20:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.834 20:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.834 20:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.834 20:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.834 20:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.091 00:16:22.091 20:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.091 20:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.091 20:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.350 20:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.350 20:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.350 20:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.350 20:15:09 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.350 20:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.350 20:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.350 { 00:16:22.350 "cntlid": 121, 00:16:22.350 "qid": 0, 00:16:22.350 "state": "enabled", 00:16:22.350 "listen_address": { 00:16:22.350 "trtype": "TCP", 00:16:22.350 "adrfam": "IPv4", 00:16:22.350 "traddr": "10.0.0.2", 00:16:22.350 "trsvcid": "4420" 00:16:22.350 }, 00:16:22.350 "peer_address": { 00:16:22.350 "trtype": "TCP", 00:16:22.350 "adrfam": "IPv4", 00:16:22.350 "traddr": "10.0.0.1", 00:16:22.350 "trsvcid": "33304" 00:16:22.350 }, 00:16:22.350 "auth": { 00:16:22.350 "state": "completed", 00:16:22.350 "digest": "sha512", 00:16:22.350 "dhgroup": "ffdhe4096" 00:16:22.350 } 00:16:22.350 } 00:16:22.350 ]' 00:16:22.350 20:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.350 20:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.350 20:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.608 20:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:22.608 20:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.608 20:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.608 20:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.608 20:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.867 20:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjQzNDI1NTE5Zjk2ZjEyZjJmZWI2Y2Y3OTYyZmExYjBmNWJlYzM4NDY0NTEzZjEzu0hpxg==: --dhchap-ctrl-secret DHHC-1:03:NDBhMTg4OTNlZTU1NWQyOTk5ZDg5MWU4ZTJiN2EwZDg3M2FhNDM3M2IxNDNlNTYxNjMwN2ZlOWExMjQ5MDYyNC750lM=: 00:16:23.797 20:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.797 20:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:23.797 20:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.797 20:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.797 20:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.797 20:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.797 20:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:23.797 20:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:23.797 20:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:16:23.797 20:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.797 20:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:23.797 20:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:23.797 20:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:23.797 20:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.797 20:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.797 20:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.797 20:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.054 20:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.054 20:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.054 20:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.310 00:16:24.310 20:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.310 20:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.310 20:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.566 20:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.566 20:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.566 20:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.566 20:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.566 20:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.566 20:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.566 { 00:16:24.566 "cntlid": 123, 00:16:24.566 "qid": 0, 00:16:24.566 "state": "enabled", 00:16:24.566 "listen_address": { 00:16:24.566 "trtype": "TCP", 00:16:24.566 "adrfam": "IPv4", 00:16:24.566 "traddr": "10.0.0.2", 00:16:24.566 "trsvcid": "4420" 00:16:24.566 }, 00:16:24.566 "peer_address": { 00:16:24.566 "trtype": "TCP", 00:16:24.566 "adrfam": "IPv4", 00:16:24.566 "traddr": "10.0.0.1", 00:16:24.566 "trsvcid": "33328" 00:16:24.566 }, 00:16:24.566 "auth": { 00:16:24.566 "state": "completed", 00:16:24.566 "digest": "sha512", 00:16:24.566 "dhgroup": "ffdhe4096" 00:16:24.566 } 00:16:24.566 } 00:16:24.566 ]' 00:16:24.566 20:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.566 20:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.566 20:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.566 20:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:24.566 20:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.566 20:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.566 20:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.566 20:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.823 20:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NWUwZDZmYzkyNTBkNmRiZTY3Y2ZjMDIwODU3NGI5OWZTxq7R: --dhchap-ctrl-secret DHHC-1:02:YmVhNjhkNTVjNGY1YmUwZDc4ZTk1OTIzZWIwMzE1ZDU4M2ViMmRlNTJkZjk4ZGRl5NAoWA==: 00:16:25.756 20:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.756 20:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:25.756 20:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.756 20:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.756 20:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.756 20:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:25.756 20:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:25.756 20:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:26.014 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:26.014 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.014 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:26.014 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:26.014 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:26.014 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.014 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.014 20:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.014 20:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.014 20:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.014 
20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.014 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.597 00:16:26.597 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.597 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.597 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.597 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.597 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.597 20:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.597 20:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.597 20:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.597 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.597 { 00:16:26.597 "cntlid": 125, 00:16:26.597 "qid": 0, 00:16:26.597 "state": "enabled", 00:16:26.597 "listen_address": { 00:16:26.597 "trtype": "TCP", 00:16:26.597 "adrfam": "IPv4", 00:16:26.597 "traddr": "10.0.0.2", 00:16:26.597 "trsvcid": "4420" 00:16:26.597 }, 00:16:26.597 "peer_address": { 00:16:26.597 "trtype": "TCP", 00:16:26.597 "adrfam": "IPv4", 00:16:26.597 "traddr": "10.0.0.1", 00:16:26.597 "trsvcid": "33366" 00:16:26.597 }, 00:16:26.597 "auth": { 00:16:26.597 "state": "completed", 00:16:26.597 "digest": "sha512", 00:16:26.597 "dhgroup": "ffdhe4096" 00:16:26.597 } 00:16:26.597 } 00:16:26.597 ]' 00:16:26.597 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.855 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.855 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.855 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:26.855 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.855 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.855 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.855 20:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.112 20:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:02:NDBhZmIxZmNmZmUyMmNkYmRiOGNkMzkxZTIyNmM0ZGI2NDcxMzAzOTc3ZjBjZTA3+ZhsHw==: --dhchap-ctrl-secret DHHC-1:01:YzhlMGQzZjE2ODUxZTI5MDU0MjM5MTIyYWQ0YTExOThWm3ye: 00:16:28.050 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.050 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:28.050 20:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.050 20:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.050 20:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.050 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.050 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:28.050 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:28.308 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:28.308 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.308 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:28.308 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:28.308 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:28.308 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.309 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:28.309 20:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.309 20:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.309 20:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.309 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.309 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.566 00:16:28.566 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.566 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.566 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.824 20:15:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.824 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.824 20:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.824 20:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.824 20:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.824 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.824 { 00:16:28.824 "cntlid": 127, 00:16:28.824 "qid": 0, 00:16:28.824 "state": "enabled", 00:16:28.824 "listen_address": { 00:16:28.824 "trtype": "TCP", 00:16:28.824 "adrfam": "IPv4", 00:16:28.824 "traddr": "10.0.0.2", 00:16:28.824 "trsvcid": "4420" 00:16:28.824 }, 00:16:28.824 "peer_address": { 00:16:28.824 "trtype": "TCP", 00:16:28.824 "adrfam": "IPv4", 00:16:28.824 "traddr": "10.0.0.1", 00:16:28.824 "trsvcid": "44388" 00:16:28.824 }, 00:16:28.824 "auth": { 00:16:28.824 "state": "completed", 00:16:28.824 "digest": "sha512", 00:16:28.824 "dhgroup": "ffdhe4096" 00:16:28.824 } 00:16:28.824 } 00:16:28.824 ]' 00:16:28.824 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.824 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.824 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.081 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:29.081 20:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.081 20:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.081 20:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.081 20:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.339 20:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDljMWNhMjYyYzE0YTYxNWNkZWIwYTMxNDYzNWYzNTBlNjMxMTg4MjU4Mjg4ODNhZDgxMGFjMGVjNzg5NDUzYn29EOQ=: 00:16:30.272 20:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.272 20:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:30.272 20:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.272 20:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.272 20:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.272 20:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.272 20:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.272 20:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
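The ffdhe4096 round above and the ffdhe6144/ffdhe8192 rounds that follow repeat the same per-key cycle. Below is a minimal sketch of that cycle, condensed from the RPC invocations visible in the log; it is not the actual target/auth.sh, the socket used by the target-side rpc_cmd calls is not shown in the log and is assumed here to be the default, and the kernel-initiator connect/disconnect step that the log runs between detach and remove_host is sketched separately at the end of this section.

# One connect/authenticate/tear-down cycle, built only from commands that appear
# in the log above. Illustrative sketch, not the real target/auth.sh.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
dhgroup=ffdhe6144   # the log loops over ffdhe4096, ffdhe6144, ffdhe8192
keyid=0             # and over key0..key3 (key3 is added without a controller key)

# Host side: pin bdev_nvme to a single digest/DH-group so the negotiation is deterministic.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"

# Target side (default RPC socket assumed): allow the host with the key under test.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Authenticate over TCP, then read back what the target recorded for the qpair
# (expected: sha512, the chosen DH group, and state "completed").
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

# Tear down before the next key / DH group.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"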
00:16:30.272 20:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.529 20:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:16:30.529 20:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.529 20:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:30.529 20:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:30.529 20:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:30.529 20:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.529 20:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.529 20:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.529 20:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.529 20:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.529 20:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.529 20:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.093 00:16:31.093 20:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.093 20:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.093 20:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.351 20:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.351 20:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.351 20:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.351 20:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.351 20:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.351 20:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.351 { 00:16:31.351 "cntlid": 129, 00:16:31.351 "qid": 0, 00:16:31.351 "state": "enabled", 00:16:31.351 "listen_address": { 00:16:31.351 "trtype": "TCP", 00:16:31.351 "adrfam": "IPv4", 00:16:31.351 "traddr": "10.0.0.2", 00:16:31.351 "trsvcid": "4420" 00:16:31.351 }, 00:16:31.351 "peer_address": { 00:16:31.351 "trtype": "TCP", 00:16:31.351 "adrfam": "IPv4", 00:16:31.351 "traddr": "10.0.0.1", 00:16:31.351 "trsvcid": "44422" 00:16:31.351 }, 00:16:31.351 "auth": { 
00:16:31.351 "state": "completed", 00:16:31.351 "digest": "sha512", 00:16:31.351 "dhgroup": "ffdhe6144" 00:16:31.351 } 00:16:31.351 } 00:16:31.351 ]' 00:16:31.351 20:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.351 20:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.351 20:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.351 20:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:31.351 20:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.351 20:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.351 20:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.351 20:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.609 20:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjQzNDI1NTE5Zjk2ZjEyZjJmZWI2Y2Y3OTYyZmExYjBmNWJlYzM4NDY0NTEzZjEzu0hpxg==: --dhchap-ctrl-secret DHHC-1:03:NDBhMTg4OTNlZTU1NWQyOTk5ZDg5MWU4ZTJiN2EwZDg3M2FhNDM3M2IxNDNlNTYxNjMwN2ZlOWExMjQ5MDYyNC750lM=: 00:16:32.982 20:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.982 20:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:32.982 20:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.982 20:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.982 20:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.982 20:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.982 20:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:32.982 20:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:32.982 20:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:32.982 20:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.982 20:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:32.982 20:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:32.982 20:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:32.982 20:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.982 20:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.982 20:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.982 20:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.982 20:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.982 20:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.982 20:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.548 00:16:33.548 20:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.548 20:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.548 20:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.805 20:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.805 20:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.805 20:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.805 20:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.805 20:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.805 20:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.805 { 00:16:33.805 "cntlid": 131, 00:16:33.805 "qid": 0, 00:16:33.805 "state": "enabled", 00:16:33.805 "listen_address": { 00:16:33.805 "trtype": "TCP", 00:16:33.805 "adrfam": "IPv4", 00:16:33.805 "traddr": "10.0.0.2", 00:16:33.805 "trsvcid": "4420" 00:16:33.805 }, 00:16:33.805 "peer_address": { 00:16:33.805 "trtype": "TCP", 00:16:33.805 "adrfam": "IPv4", 00:16:33.805 "traddr": "10.0.0.1", 00:16:33.805 "trsvcid": "44452" 00:16:33.805 }, 00:16:33.805 "auth": { 00:16:33.805 "state": "completed", 00:16:33.805 "digest": "sha512", 00:16:33.805 "dhgroup": "ffdhe6144" 00:16:33.805 } 00:16:33.805 } 00:16:33.805 ]' 00:16:33.805 20:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.063 20:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.063 20:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.063 20:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:34.063 20:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.063 20:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.063 20:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.063 20:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.320 20:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NWUwZDZmYzkyNTBkNmRiZTY3Y2ZjMDIwODU3NGI5OWZTxq7R: --dhchap-ctrl-secret DHHC-1:02:YmVhNjhkNTVjNGY1YmUwZDc4ZTk1OTIzZWIwMzE1ZDU4M2ViMmRlNTJkZjk4ZGRl5NAoWA==: 00:16:35.254 20:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.254 20:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:35.254 20:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.254 20:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.254 20:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.254 20:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.254 20:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:35.254 20:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:35.511 20:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:35.511 20:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.511 20:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:35.511 20:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:35.511 20:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:35.511 20:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.511 20:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.511 20:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.511 20:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.511 20:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.511 20:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.511 20:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:16:36.077 00:16:36.077 20:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.077 20:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.077 20:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.335 20:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.335 20:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.335 20:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.335 20:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.335 20:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.335 20:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.335 { 00:16:36.335 "cntlid": 133, 00:16:36.335 "qid": 0, 00:16:36.335 "state": "enabled", 00:16:36.335 "listen_address": { 00:16:36.335 "trtype": "TCP", 00:16:36.335 "adrfam": "IPv4", 00:16:36.335 "traddr": "10.0.0.2", 00:16:36.335 "trsvcid": "4420" 00:16:36.335 }, 00:16:36.335 "peer_address": { 00:16:36.335 "trtype": "TCP", 00:16:36.335 "adrfam": "IPv4", 00:16:36.335 "traddr": "10.0.0.1", 00:16:36.335 "trsvcid": "44478" 00:16:36.335 }, 00:16:36.335 "auth": { 00:16:36.335 "state": "completed", 00:16:36.335 "digest": "sha512", 00:16:36.335 "dhgroup": "ffdhe6144" 00:16:36.335 } 00:16:36.335 } 00:16:36.335 ]' 00:16:36.335 20:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.335 20:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.335 20:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.335 20:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:36.335 20:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.592 20:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.592 20:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.592 20:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.849 20:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDBhZmIxZmNmZmUyMmNkYmRiOGNkMzkxZTIyNmM0ZGI2NDcxMzAzOTc3ZjBjZTA3+ZhsHw==: --dhchap-ctrl-secret DHHC-1:01:YzhlMGQzZjE2ODUxZTI5MDU0MjM5MTIyYWQ0YTExOThWm3ye: 00:16:37.782 20:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.782 20:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:37.782 20:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.782 20:15:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.782 20:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.782 20:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.782 20:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:37.782 20:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:38.039 20:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:38.039 20:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.039 20:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:38.039 20:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:38.039 20:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:38.039 20:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.039 20:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:38.039 20:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.040 20:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.040 20:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.040 20:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.040 20:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.604 00:16:38.604 20:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.604 20:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.604 20:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.862 20:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.862 20:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.862 20:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.862 20:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.862 20:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.862 20:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.862 { 00:16:38.862 "cntlid": 135, 00:16:38.862 "qid": 0, 00:16:38.862 "state": "enabled", 00:16:38.862 "listen_address": { 
00:16:38.862 "trtype": "TCP", 00:16:38.862 "adrfam": "IPv4", 00:16:38.862 "traddr": "10.0.0.2", 00:16:38.862 "trsvcid": "4420" 00:16:38.862 }, 00:16:38.862 "peer_address": { 00:16:38.862 "trtype": "TCP", 00:16:38.862 "adrfam": "IPv4", 00:16:38.862 "traddr": "10.0.0.1", 00:16:38.862 "trsvcid": "45734" 00:16:38.862 }, 00:16:38.862 "auth": { 00:16:38.862 "state": "completed", 00:16:38.862 "digest": "sha512", 00:16:38.862 "dhgroup": "ffdhe6144" 00:16:38.862 } 00:16:38.862 } 00:16:38.862 ]' 00:16:38.862 20:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.862 20:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.862 20:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.862 20:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:38.862 20:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.119 20:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.119 20:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.119 20:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.377 20:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDljMWNhMjYyYzE0YTYxNWNkZWIwYTMxNDYzNWYzNTBlNjMxMTg4MjU4Mjg4ODNhZDgxMGFjMGVjNzg5NDUzYn29EOQ=: 00:16:40.308 20:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.308 20:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:40.308 20:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.308 20:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.308 20:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.308 20:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.308 20:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.308 20:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:40.308 20:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:40.566 20:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:40.566 20:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.566 20:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:40.566 20:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:40.566 20:15:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:16:40.566 20:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.566 20:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.566 20:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.566 20:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.566 20:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.566 20:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.566 20:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.499 00:16:41.499 20:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.499 20:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.499 20:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.499 20:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.499 20:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.499 20:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.499 20:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.499 20:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.499 20:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.499 { 00:16:41.499 "cntlid": 137, 00:16:41.499 "qid": 0, 00:16:41.499 "state": "enabled", 00:16:41.499 "listen_address": { 00:16:41.499 "trtype": "TCP", 00:16:41.499 "adrfam": "IPv4", 00:16:41.499 "traddr": "10.0.0.2", 00:16:41.499 "trsvcid": "4420" 00:16:41.499 }, 00:16:41.499 "peer_address": { 00:16:41.499 "trtype": "TCP", 00:16:41.499 "adrfam": "IPv4", 00:16:41.499 "traddr": "10.0.0.1", 00:16:41.499 "trsvcid": "45768" 00:16:41.499 }, 00:16:41.499 "auth": { 00:16:41.499 "state": "completed", 00:16:41.499 "digest": "sha512", 00:16:41.499 "dhgroup": "ffdhe8192" 00:16:41.499 } 00:16:41.499 } 00:16:41.499 ]' 00:16:41.499 20:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.758 20:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.758 20:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.758 20:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:41.758 20:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.758 20:15:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.758 20:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.758 20:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.016 20:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjQzNDI1NTE5Zjk2ZjEyZjJmZWI2Y2Y3OTYyZmExYjBmNWJlYzM4NDY0NTEzZjEzu0hpxg==: --dhchap-ctrl-secret DHHC-1:03:NDBhMTg4OTNlZTU1NWQyOTk5ZDg5MWU4ZTJiN2EwZDg3M2FhNDM3M2IxNDNlNTYxNjMwN2ZlOWExMjQ5MDYyNC750lM=: 00:16:42.950 20:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.950 20:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:42.950 20:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.950 20:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.950 20:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.950 20:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.950 20:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:42.950 20:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:43.207 20:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:43.207 20:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.207 20:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:43.207 20:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:43.207 20:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:43.207 20:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.207 20:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.207 20:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.207 20:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.207 20:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.208 20:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.208 20:15:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.140 00:16:44.140 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.140 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.140 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.398 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.398 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.398 20:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.398 20:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.398 20:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.398 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.398 { 00:16:44.398 "cntlid": 139, 00:16:44.398 "qid": 0, 00:16:44.398 "state": "enabled", 00:16:44.398 "listen_address": { 00:16:44.398 "trtype": "TCP", 00:16:44.398 "adrfam": "IPv4", 00:16:44.398 "traddr": "10.0.0.2", 00:16:44.398 "trsvcid": "4420" 00:16:44.398 }, 00:16:44.398 "peer_address": { 00:16:44.398 "trtype": "TCP", 00:16:44.398 "adrfam": "IPv4", 00:16:44.398 "traddr": "10.0.0.1", 00:16:44.398 "trsvcid": "45792" 00:16:44.398 }, 00:16:44.398 "auth": { 00:16:44.398 "state": "completed", 00:16:44.398 "digest": "sha512", 00:16:44.398 "dhgroup": "ffdhe8192" 00:16:44.398 } 00:16:44.398 } 00:16:44.398 ]' 00:16:44.398 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.398 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.398 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.398 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:44.398 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.398 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.398 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.398 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.656 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NWUwZDZmYzkyNTBkNmRiZTY3Y2ZjMDIwODU3NGI5OWZTxq7R: --dhchap-ctrl-secret DHHC-1:02:YmVhNjhkNTVjNGY1YmUwZDc4ZTk1OTIzZWIwMzE1ZDU4M2ViMmRlNTJkZjk4ZGRl5NAoWA==: 00:16:45.588 20:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:16:45.588 20:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:45.588 20:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.588 20:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.588 20:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.588 20:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.588 20:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:45.588 20:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:45.845 20:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:45.845 20:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.845 20:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:45.845 20:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:45.845 20:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:45.845 20:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.845 20:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.845 20:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.845 20:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.845 20:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.845 20:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.845 20:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.923 00:16:46.923 20:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.923 20:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.923 20:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.923 20:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.923 20:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.923 20:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:46.923 20:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.923 20:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.923 20:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.923 { 00:16:46.923 "cntlid": 141, 00:16:46.923 "qid": 0, 00:16:46.923 "state": "enabled", 00:16:46.923 "listen_address": { 00:16:46.923 "trtype": "TCP", 00:16:46.923 "adrfam": "IPv4", 00:16:46.923 "traddr": "10.0.0.2", 00:16:46.923 "trsvcid": "4420" 00:16:46.923 }, 00:16:46.923 "peer_address": { 00:16:46.923 "trtype": "TCP", 00:16:46.923 "adrfam": "IPv4", 00:16:46.923 "traddr": "10.0.0.1", 00:16:46.923 "trsvcid": "45816" 00:16:46.923 }, 00:16:46.923 "auth": { 00:16:46.923 "state": "completed", 00:16:46.923 "digest": "sha512", 00:16:46.923 "dhgroup": "ffdhe8192" 00:16:46.923 } 00:16:46.923 } 00:16:46.923 ]' 00:16:46.923 20:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.923 20:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.923 20:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.181 20:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.181 20:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.181 20:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.181 20:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.181 20:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.438 20:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDBhZmIxZmNmZmUyMmNkYmRiOGNkMzkxZTIyNmM0ZGI2NDcxMzAzOTc3ZjBjZTA3+ZhsHw==: --dhchap-ctrl-secret DHHC-1:01:YzhlMGQzZjE2ODUxZTI5MDU0MjM5MTIyYWQ0YTExOThWm3ye: 00:16:48.370 20:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.370 20:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:48.370 20:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.370 20:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.370 20:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.370 20:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.370 20:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:48.370 20:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:48.628 20:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:16:48.628 20:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.628 20:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:48.628 20:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:48.628 20:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:48.628 20:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.628 20:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:48.628 20:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.628 20:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.628 20:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.628 20:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:48.628 20:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.561 00:16:49.561 20:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.561 20:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.561 20:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.818 20:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.818 20:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.818 20:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.818 20:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.818 20:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.818 20:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.818 { 00:16:49.818 "cntlid": 143, 00:16:49.818 "qid": 0, 00:16:49.818 "state": "enabled", 00:16:49.818 "listen_address": { 00:16:49.818 "trtype": "TCP", 00:16:49.818 "adrfam": "IPv4", 00:16:49.818 "traddr": "10.0.0.2", 00:16:49.818 "trsvcid": "4420" 00:16:49.819 }, 00:16:49.819 "peer_address": { 00:16:49.819 "trtype": "TCP", 00:16:49.819 "adrfam": "IPv4", 00:16:49.819 "traddr": "10.0.0.1", 00:16:49.819 "trsvcid": "52002" 00:16:49.819 }, 00:16:49.819 "auth": { 00:16:49.819 "state": "completed", 00:16:49.819 "digest": "sha512", 00:16:49.819 "dhgroup": "ffdhe8192" 00:16:49.819 } 00:16:49.819 } 00:16:49.819 ]' 00:16:49.819 20:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.819 20:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.819 20:15:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.819 20:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.819 20:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.819 20:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.819 20:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.819 20:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.076 20:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDljMWNhMjYyYzE0YTYxNWNkZWIwYTMxNDYzNWYzNTBlNjMxMTg4MjU4Mjg4ODNhZDgxMGFjMGVjNzg5NDUzYn29EOQ=: 00:16:51.009 20:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.009 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:51.009 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.009 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.009 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.009 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:51.009 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:51.009 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:51.009 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:51.009 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:51.009 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:51.267 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:51.267 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.267 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:51.267 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:51.267 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:51.267 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.267 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 
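Each connect_authenticate round in this loop is the same three RPCs: bdev_nvme_set_options pins the host to the digest/dhgroup pair under test, nvmf_subsystem_add_host allows the key (plus an optional controller key for bidirectional auth) for this host NQN on the subsystem, and bdev_nvme_attach_controller authenticates with the matching key names. A minimal standalone sketch of one round, with the rpc_cmd/hostrpc wrappers expanded to plain rpc.py calls and assuming keys named key0/ckey0 are already loaded on the target as in this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    host=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    # host side: only negotiate the digest/dhgroup pair under test
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # target side: allow this host with key0, plus ckey0 for bidirectional authentication
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$host" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach a controller, authenticating with the matching keys
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$host" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # inspect the qpair: target/auth.sh@46-48 check digest, dhgroup and state == completed
    $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'

The nvme connect/disconnect pair that follows each round repeats the same handshake through the kernel initiator, passing the DHHC-1 secrets shown in the log directly on the command line.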
00:16:51.267 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.267 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.267 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.267 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.267 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.200 00:16:52.200 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.200 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.200 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.457 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.457 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.457 20:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.457 20:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.457 20:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.457 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.457 { 00:16:52.457 "cntlid": 145, 00:16:52.457 "qid": 0, 00:16:52.457 "state": "enabled", 00:16:52.457 "listen_address": { 00:16:52.457 "trtype": "TCP", 00:16:52.457 "adrfam": "IPv4", 00:16:52.457 "traddr": "10.0.0.2", 00:16:52.457 "trsvcid": "4420" 00:16:52.457 }, 00:16:52.457 "peer_address": { 00:16:52.457 "trtype": "TCP", 00:16:52.457 "adrfam": "IPv4", 00:16:52.457 "traddr": "10.0.0.1", 00:16:52.457 "trsvcid": "52028" 00:16:52.457 }, 00:16:52.457 "auth": { 00:16:52.457 "state": "completed", 00:16:52.457 "digest": "sha512", 00:16:52.457 "dhgroup": "ffdhe8192" 00:16:52.457 } 00:16:52.457 } 00:16:52.457 ]' 00:16:52.457 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.457 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.457 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.458 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.458 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.458 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.458 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.458 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.715 
20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjQzNDI1NTE5Zjk2ZjEyZjJmZWI2Y2Y3OTYyZmExYjBmNWJlYzM4NDY0NTEzZjEzu0hpxg==: --dhchap-ctrl-secret DHHC-1:03:NDBhMTg4OTNlZTU1NWQyOTk5ZDg5MWU4ZTJiN2EwZDg3M2FhNDM3M2IxNDNlNTYxNjMwN2ZlOWExMjQ5MDYyNC750lM=: 00:16:53.664 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.664 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:53.664 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.664 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.664 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.664 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:16:53.664 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.664 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.664 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.664 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:53.664 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:53.664 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:53.664 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:53.664 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:53.664 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:53.664 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:53.664 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:53.664 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:54.601 request: 00:16:54.601 { 00:16:54.601 "name": "nvme0", 00:16:54.601 "trtype": "tcp", 00:16:54.601 "traddr": 
"10.0.0.2", 00:16:54.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:54.601 "adrfam": "ipv4", 00:16:54.601 "trsvcid": "4420", 00:16:54.601 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:54.601 "dhchap_key": "key2", 00:16:54.601 "method": "bdev_nvme_attach_controller", 00:16:54.601 "req_id": 1 00:16:54.601 } 00:16:54.601 Got JSON-RPC error response 00:16:54.601 response: 00:16:54.601 { 00:16:54.601 "code": -32602, 00:16:54.601 "message": "Invalid parameters" 00:16:54.601 } 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:54.601 20:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:55.533 request: 00:16:55.533 { 00:16:55.533 "name": "nvme0", 00:16:55.533 "trtype": "tcp", 00:16:55.533 "traddr": "10.0.0.2", 00:16:55.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:55.533 "adrfam": "ipv4", 00:16:55.533 "trsvcid": "4420", 00:16:55.533 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:55.533 "dhchap_key": "key1", 00:16:55.533 "dhchap_ctrlr_key": "ckey2", 00:16:55.533 "method": "bdev_nvme_attach_controller", 00:16:55.533 "req_id": 1 00:16:55.533 } 00:16:55.533 Got JSON-RPC error response 00:16:55.533 response: 00:16:55.533 { 00:16:55.533 "code": -32602, 00:16:55.533 "message": "Invalid parameters" 00:16:55.533 } 00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
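The two NOT blocks here are the negative half of the matrix: the first attach presents a controller key (ckey2) that does not match the ckey1 registered at target/auth.sh@124, and the second, after the host entry is re-added at @131 with key1 only, presents ckey1 even though the subsystem now holds no controller key at all. Both attempts must fail with the "Invalid parameters" JSON-RPC responses logged above and below, and NOT() turns that failure into a pass. A rough standalone equivalent of the second assertion, with the same sockets and NQNs and a plain shell conditional in place of the NOT()/valid_exec_arg machinery:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    host=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    # subsystem entry allows key1 only; no controller key is registered for this host
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$host" --dhchap-key key1
    # a bidirectional attach attempt from the host must therefore be rejected
    if $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$host" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1; then
        echo "attach unexpectedly succeeded" >&2; exit 1
    fi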
00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.533 20:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.467 request: 00:16:56.467 { 00:16:56.467 "name": "nvme0", 00:16:56.467 "trtype": "tcp", 00:16:56.467 "traddr": "10.0.0.2", 00:16:56.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:56.467 "adrfam": "ipv4", 00:16:56.467 "trsvcid": "4420", 00:16:56.467 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:56.467 "dhchap_key": "key1", 00:16:56.467 "dhchap_ctrlr_key": "ckey1", 00:16:56.467 "method": "bdev_nvme_attach_controller", 00:16:56.467 "req_id": 1 00:16:56.467 } 00:16:56.467 Got JSON-RPC error response 00:16:56.467 response: 00:16:56.467 { 00:16:56.467 "code": -32602, 00:16:56.467 "message": "Invalid parameters" 00:16:56.467 } 00:16:56.467 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:56.467 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:56.467 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:56.467 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:56.467 20:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:56.467 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.467 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.467 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.467 20:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 206280 00:16:56.467 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 206280 ']' 00:16:56.467 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 206280 00:16:56.467 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:16:56.467 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:56.467 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 206280 00:16:56.467 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:56.467 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:56.467 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 206280' 00:16:56.467 killing process with pid 206280 00:16:56.467 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 206280 00:16:56.467 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 206280 00:16:56.724 20:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart 
--wait-for-rpc -L nvmf_auth 00:16:56.724 20:15:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:56.724 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:56.724 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.724 20:15:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=228833 00:16:56.724 20:15:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:56.724 20:15:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 228833 00:16:56.724 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 228833 ']' 00:16:56.724 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.724 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:56.724 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.724 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:56.724 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.981 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:56.981 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:16:56.981 20:15:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:56.981 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:56.981 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.981 20:15:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.981 20:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:56.981 20:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 228833 00:16:56.981 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 228833 ']' 00:16:56.981 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.981 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:56.981 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
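nvmfappstart restarts the target here with -L nvmf_auth, so the DH-HMAC-CHAP exchanges get traced into nvmf-auth.log, and with --wait-for-rpc, so the remaining auth configuration can be applied before framework initialization runs. The waitforlisten/rpc_cmd helpers hide the details; a rough sketch, assuming the stock rpc_get_methods and framework_start_init RPCs (the actual batch sent at target/auth.sh@143 below is read from a heredoc and is not visible in this trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # poll the default RPC socket until the app answers, then resume initialization
    until $rpc -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    $rpc framework_start_init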
00:16:56.981 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:56.981 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.240 20:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:57.240 20:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:16:57.240 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:16:57.240 20:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.240 20:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.240 20:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.240 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:16:57.240 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.240 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:57.240 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:57.240 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:57.240 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.240 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:57.240 20:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.240 20:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.240 20:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.240 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.240 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.172 00:16:58.172 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.172 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.172 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.430 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.430 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.430 20:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.430 20:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.430 20:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.430 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.430 { 00:16:58.430 
"cntlid": 1, 00:16:58.430 "qid": 0, 00:16:58.430 "state": "enabled", 00:16:58.430 "listen_address": { 00:16:58.430 "trtype": "TCP", 00:16:58.430 "adrfam": "IPv4", 00:16:58.430 "traddr": "10.0.0.2", 00:16:58.430 "trsvcid": "4420" 00:16:58.430 }, 00:16:58.430 "peer_address": { 00:16:58.430 "trtype": "TCP", 00:16:58.430 "adrfam": "IPv4", 00:16:58.430 "traddr": "10.0.0.1", 00:16:58.430 "trsvcid": "48516" 00:16:58.430 }, 00:16:58.430 "auth": { 00:16:58.430 "state": "completed", 00:16:58.430 "digest": "sha512", 00:16:58.430 "dhgroup": "ffdhe8192" 00:16:58.430 } 00:16:58.430 } 00:16:58.430 ]' 00:16:58.430 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.430 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.430 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.687 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.687 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.687 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.687 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.687 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.945 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDljMWNhMjYyYzE0YTYxNWNkZWIwYTMxNDYzNWYzNTBlNjMxMTg4MjU4Mjg4ODNhZDgxMGFjMGVjNzg5NDUzYn29EOQ=: 00:16:59.875 20:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.875 20:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:59.875 20:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.875 20:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.875 20:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.875 20:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:59.875 20:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.875 20:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.875 20:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.875 20:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:59.875 20:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:00.132 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.132 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:00.132 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.132 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:00.132 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.132 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:00.132 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.132 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.132 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.389 request: 00:17:00.389 { 00:17:00.389 "name": "nvme0", 00:17:00.389 "trtype": "tcp", 00:17:00.389 "traddr": "10.0.0.2", 00:17:00.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:00.389 "adrfam": "ipv4", 00:17:00.389 "trsvcid": "4420", 00:17:00.389 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:00.389 "dhchap_key": "key3", 00:17:00.389 "method": "bdev_nvme_attach_controller", 00:17:00.389 "req_id": 1 00:17:00.389 } 00:17:00.389 Got JSON-RPC error response 00:17:00.389 response: 00:17:00.389 { 00:17:00.389 "code": -32602, 00:17:00.389 "message": "Invalid parameters" 00:17:00.389 } 00:17:00.389 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:00.389 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:00.389 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:00.389 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:00.389 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:00.389 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:00.389 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:00.389 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:00.648 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.648 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:00.648 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.648 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:00.648 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.648 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:00.648 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.648 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.648 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.906 request: 00:17:00.906 { 00:17:00.906 "name": "nvme0", 00:17:00.906 "trtype": "tcp", 00:17:00.906 "traddr": "10.0.0.2", 00:17:00.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:00.906 "adrfam": "ipv4", 00:17:00.906 "trsvcid": "4420", 00:17:00.906 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:00.906 "dhchap_key": "key3", 00:17:00.906 "method": "bdev_nvme_attach_controller", 00:17:00.906 "req_id": 1 00:17:00.906 } 00:17:00.906 Got JSON-RPC error response 00:17:00.906 response: 00:17:00.906 { 00:17:00.906 "code": -32602, 00:17:00.906 "message": "Invalid parameters" 00:17:00.906 } 00:17:00.906 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:00.906 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:00.906 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:00.906 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:00.906 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@173 -- # trap - SIGINT SIGTERM EXIT 00:17:00.906 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@174 -- # cleanup 00:17:00.906 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 206299 00:17:00.906 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 206299 ']' 00:17:00.906 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 206299 00:17:00.906 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:17:00.906 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:00.906 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 206299 00:17:00.906 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:00.906 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' 
reactor_1 = sudo ']' 00:17:00.906 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 206299' 00:17:00.906 killing process with pid 206299 00:17:00.906 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 206299 00:17:00.906 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 206299 00:17:01.164 20:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:01.164 20:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:01.164 20:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:01.164 20:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:01.164 20:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:01.164 20:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:01.164 20:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:01.164 rmmod nvme_tcp 00:17:01.164 rmmod nvme_fabrics 00:17:01.422 rmmod nvme_keyring 00:17:01.422 20:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:01.422 20:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:01.422 20:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:01.422 20:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 228833 ']' 00:17:01.422 20:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 228833 00:17:01.422 20:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 228833 ']' 00:17:01.422 20:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 228833 00:17:01.422 20:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:17:01.422 20:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:01.422 20:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 228833 00:17:01.422 20:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:01.422 20:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:01.422 20:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 228833' 00:17:01.422 killing process with pid 228833 00:17:01.422 20:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 228833 00:17:01.422 20:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 228833 00:17:01.681 20:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:01.681 20:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:01.681 20:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:01.681 20:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:01.681 20:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:01.681 20:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.681 20:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.681 20:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.582 20:15:50 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:03.582 20:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.gvz /tmp/spdk.key-sha256.93h /tmp/spdk.key-sha384.Vfw /tmp/spdk.key-sha512.zfg /tmp/spdk.key-sha512.O0y /tmp/spdk.key-sha384.eFX /tmp/spdk.key-sha256.8j3 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:03.582 00:17:03.582 real 3m7.249s 00:17:03.582 user 7m14.551s 00:17:03.582 sys 0m22.095s 00:17:03.582 20:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:03.582 20:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.582 ************************************ 00:17:03.582 END TEST nvmf_auth_target 00:17:03.582 ************************************ 00:17:03.582 20:15:50 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:03.582 20:15:50 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:03.582 20:15:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:17:03.582 20:15:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:03.582 20:15:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:03.841 ************************************ 00:17:03.841 START TEST nvmf_bdevio_no_huge 00:17:03.841 ************************************ 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:03.841 * Looking for test storage... 00:17:03.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
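Before the bdevio run, nvmf/common.sh derives the initiator identity once: NVME_HOSTNQN from nvme gen-hostnqn (here it resolves to the same UUID-based NQN, 29f67375-a902-e411-ace9-001e67bc3c9a, used throughout the auth test above), NVME_HOSTID as its uuid portion, and NVME_HOST packing both into ready-made nvme-cli arguments. A small sketch of how these are meant to be consumed; the suffix extraction is an illustrative stand-in rather than the exact common.sh line, and the subsystem NQN is the NVME_SUBNQN default set just below:

    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}        # uuid portion, e.g. 29f67375-a902-e411-ace9-001e67bc3c9a
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # expanded later by the connect helpers, e.g.:
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"
    nvme disconnect -n nqn.2016-06.io.spdk:testnqn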
00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:03.841 20:15:50 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:03.841 20:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:05.740 20:15:52 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:05.740 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:05.740 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp 
== rdma ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:05.740 Found net devices under 0000:09:00.0: cvl_0_0 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:05.740 Found net devices under 0000:09:00.1: cvl_0_1 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- 
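The device discovery traced above walks each matching PCI function and resolves it to a kernel net device through sysfs. A condensed sketch of that lookup is below; the PCI addresses and cvl_* names are the ones this log reports, and the up-check reading operstate is an assumption about what the [[ up == up ]] comparison actually inspects:

for pci in 0000:09:00.0 0000:09:00.1; do
  for path in "/sys/bus/pci/devices/$pci/net/"*; do
    dev=${path##*/}                                   # e.g. cvl_0_0, cvl_0_1
    [[ $(cat "$path/operstate") == up ]] || continue  # assumed source of the "up" check
    echo "Found net devices under $pci: $dev"
  done
done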
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:05.740 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:05.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:05.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:17:05.740 00:17:05.740 --- 10.0.0.2 ping statistics --- 00:17:05.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.740 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:05.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:05.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:17:05.741 00:17:05.741 --- 10.0.0.1 ping statistics --- 00:17:05.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.741 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=231341 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 231341 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 231341 ']' 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:05.741 20:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:05.741 [2024-05-16 20:15:52.811608] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:17:05.741 [2024-05-16 20:15:52.811698] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:05.741 [2024-05-16 20:15:52.882752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:05.999 [2024-05-16 20:15:52.992360] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
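For reference, the test network that nvmf_tcp_init builds in the trace above reduces to the commands below (collected from the trace itself; cvl_0_0 and cvl_0_1 are the two E810 ports this rig reports, and everything needs root):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in on the initiator side
ping -c 1 10.0.0.2                                             # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and the reverse direction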
00:17:05.999 [2024-05-16 20:15:52.992424] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.999 [2024-05-16 20:15:52.992453] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.999 [2024-05-16 20:15:52.992464] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.999 [2024-05-16 20:15:52.992474] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.999 [2024-05-16 20:15:52.992561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:05.999 [2024-05-16 20:15:52.992870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:05.999 [2024-05-16 20:15:52.992903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:05.999 [2024-05-16 20:15:52.992906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:05.999 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:05.999 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:17:05.999 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:05.999 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:05.999 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:05.999 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.999 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:05.999 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.999 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:05.999 [2024-05-16 20:15:53.115832] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.999 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.999 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:05.999 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.999 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:05.999 Malloc0 00:17:05.999 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.999 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:05.999 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.999 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:05.999 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.999 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:05.999 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.257 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:06.257 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.257 20:15:53 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:06.257 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.257 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:06.257 [2024-05-16 20:15:53.155905] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:06.257 [2024-05-16 20:15:53.156196] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:06.257 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.258 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:06.258 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:06.258 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:06.258 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:06.258 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:06.258 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:06.258 { 00:17:06.258 "params": { 00:17:06.258 "name": "Nvme$subsystem", 00:17:06.258 "trtype": "$TEST_TRANSPORT", 00:17:06.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.258 "adrfam": "ipv4", 00:17:06.258 "trsvcid": "$NVMF_PORT", 00:17:06.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.258 "hdgst": ${hdgst:-false}, 00:17:06.258 "ddgst": ${ddgst:-false} 00:17:06.258 }, 00:17:06.258 "method": "bdev_nvme_attach_controller" 00:17:06.258 } 00:17:06.258 EOF 00:17:06.258 )") 00:17:06.258 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:06.258 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:17:06.258 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:06.258 20:15:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:06.258 "params": { 00:17:06.258 "name": "Nvme1", 00:17:06.258 "trtype": "tcp", 00:17:06.258 "traddr": "10.0.0.2", 00:17:06.258 "adrfam": "ipv4", 00:17:06.258 "trsvcid": "4420", 00:17:06.258 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.258 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:06.258 "hdgst": false, 00:17:06.258 "ddgst": false 00:17:06.258 }, 00:17:06.258 "method": "bdev_nvme_attach_controller" 00:17:06.258 }' 00:17:06.258 [2024-05-16 20:15:53.201551] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
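Collected in one place, the bdevio target is provisioned with the five rpc_cmd calls traced above (rpc_cmd wraps scripts/rpc.py here, talking to the default /var/tmp/spdk.sock socket of the namespaced nvmf_tgt):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio binary is then pointed at that listener through the generated attach-controller JSON shown above, passed on /dev/fd/62 together with --no-huge -s 1024.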
00:17:06.258 [2024-05-16 20:15:53.201621] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid231481 ] 00:17:06.258 [2024-05-16 20:15:53.264639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:06.258 [2024-05-16 20:15:53.380513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.258 [2024-05-16 20:15:53.380565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.258 [2024-05-16 20:15:53.380568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.515 I/O targets: 00:17:06.515 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:06.515 00:17:06.515 00:17:06.515 CUnit - A unit testing framework for C - Version 2.1-3 00:17:06.515 http://cunit.sourceforge.net/ 00:17:06.515 00:17:06.516 00:17:06.516 Suite: bdevio tests on: Nvme1n1 00:17:06.516 Test: blockdev write read block ...passed 00:17:06.516 Test: blockdev write zeroes read block ...passed 00:17:06.516 Test: blockdev write zeroes read no split ...passed 00:17:06.773 Test: blockdev write zeroes read split ...passed 00:17:06.773 Test: blockdev write zeroes read split partial ...passed 00:17:06.773 Test: blockdev reset ...[2024-05-16 20:15:53.701441] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:06.773 [2024-05-16 20:15:53.701552] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cae20 (9): Bad file descriptor 00:17:06.773 [2024-05-16 20:15:53.718301] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:06.773 passed 00:17:06.773 Test: blockdev write read 8 blocks ...passed 00:17:06.773 Test: blockdev write read size > 128k ...passed 00:17:06.773 Test: blockdev write read invalid size ...passed 00:17:06.773 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:06.773 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:06.773 Test: blockdev write read max offset ...passed 00:17:06.773 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:06.773 Test: blockdev writev readv 8 blocks ...passed 00:17:06.773 Test: blockdev writev readv 30 x 1block ...passed 00:17:07.031 Test: blockdev writev readv block ...passed 00:17:07.031 Test: blockdev writev readv size > 128k ...passed 00:17:07.031 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:07.031 Test: blockdev comparev and writev ...[2024-05-16 20:15:53.933619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.031 [2024-05-16 20:15:53.933657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.031 [2024-05-16 20:15:53.933681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.031 [2024-05-16 20:15:53.933698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.031 [2024-05-16 20:15:53.934035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.031 [2024-05-16 20:15:53.934060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:07.031 [2024-05-16 20:15:53.934082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.031 [2024-05-16 20:15:53.934099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:07.031 [2024-05-16 20:15:53.934420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.031 [2024-05-16 20:15:53.934444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:07.031 [2024-05-16 20:15:53.934465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.031 [2024-05-16 20:15:53.934486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:07.031 [2024-05-16 20:15:53.934813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.031 [2024-05-16 20:15:53.934836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:07.031 [2024-05-16 20:15:53.934864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.031 [2024-05-16 20:15:53.934882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:07.031 passed 00:17:07.031 Test: blockdev nvme passthru rw ...passed 00:17:07.031 Test: blockdev nvme passthru vendor specific ...[2024-05-16 20:15:54.017137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:07.031 [2024-05-16 20:15:54.017165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:07.031 [2024-05-16 20:15:54.017305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:07.031 [2024-05-16 20:15:54.017328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:07.031 [2024-05-16 20:15:54.017466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:07.031 [2024-05-16 20:15:54.017488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:07.031 [2024-05-16 20:15:54.017632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:07.031 [2024-05-16 20:15:54.017654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:07.031 passed 00:17:07.031 Test: blockdev nvme admin passthru ...passed 00:17:07.031 Test: blockdev copy ...passed 00:17:07.031 00:17:07.031 Run Summary: Type Total Ran Passed Failed Inactive 00:17:07.031 suites 1 1 n/a 0 0 00:17:07.031 tests 23 23 23 0 0 00:17:07.031 asserts 152 152 152 0 
n/a 00:17:07.031 00:17:07.031 Elapsed time = 1.069 seconds 00:17:07.287 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.287 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.287 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:07.544 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.544 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:07.545 rmmod nvme_tcp 00:17:07.545 rmmod nvme_fabrics 00:17:07.545 rmmod nvme_keyring 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 231341 ']' 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 231341 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 231341 ']' 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 231341 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 231341 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 231341' 00:17:07.545 killing process with pid 231341 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 231341 00:17:07.545 [2024-05-16 20:15:54.533379] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:07.545 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 231341 00:17:07.803 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:07.803 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:07.803 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:07.803 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:07.803 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:07.803 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.803 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.803 20:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.335 20:15:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:10.335 00:17:10.335 real 0m6.246s 00:17:10.335 user 0m10.005s 00:17:10.335 sys 0m2.356s 00:17:10.335 20:15:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:10.335 20:15:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:10.335 ************************************ 00:17:10.335 END TEST nvmf_bdevio_no_huge 00:17:10.335 ************************************ 00:17:10.335 20:15:57 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:10.335 20:15:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:10.335 20:15:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:10.335 20:15:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:10.335 ************************************ 00:17:10.335 START TEST nvmf_tls 00:17:10.335 ************************************ 00:17:10.335 20:15:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:10.335 * Looking for test storage... 00:17:10.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.335 20:15:57 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.335 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:10.335 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.335 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.335 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.335 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.335 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.335 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.335 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.335 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.335 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.335 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.335 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:10.336 20:15:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:12.236 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:12.236 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:12.236 Found net devices under 0000:09:00.0: cvl_0_0 
00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:12.236 Found net devices under 0000:09:00.1: cvl_0_1 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:12.236 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:12.236 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:17:12.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:17:12.236 00:17:12.236 --- 10.0.0.2 ping statistics --- 00:17:12.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.237 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:12.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:12.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:17:12.237 00:17:12.237 --- 10.0.0.1 ping statistics --- 00:17:12.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.237 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=233549 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 233549 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 233549 ']' 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:12.237 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:12.237 [2024-05-16 20:15:59.201255] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:17:12.237 [2024-05-16 20:15:59.201347] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.237 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.237 [2024-05-16 20:15:59.271824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.495 [2024-05-16 20:15:59.393600] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.495 [2024-05-16 20:15:59.393660] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.495 [2024-05-16 20:15:59.393684] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.495 [2024-05-16 20:15:59.393699] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.495 [2024-05-16 20:15:59.393711] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.495 [2024-05-16 20:15:59.393741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.495 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:12.495 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:12.495 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:12.495 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:12.495 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:12.495 20:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.495 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:12.495 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:12.753 true 00:17:12.753 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:12.753 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:13.011 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:13.011 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:13.011 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:13.269 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:13.269 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:13.526 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:13.526 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:13.526 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:13.785 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:13.785 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:14.043 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:14.043 20:16:01 
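Because the target was started with --wait-for-rpc, the script can tune the ssl socket implementation before the framework comes up (framework_start_init is issued further down, at tls.sh@131). The probing traced here amounts to the following, simplified:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc sock_set_default_impl -i ssl                       # make ssl the default sock implementation
$rpc sock_impl_set_options -i ssl --tls-version 13      # the script also tries version 7 and toggles ktls on/off
$rpc sock_impl_get_options -i ssl | jq -r .tls_version  # read the option back; the trace shows 13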
nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:14.043 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:14.043 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:14.301 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:14.301 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:14.301 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:14.558 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:14.558 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:14.815 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:14.815 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:14.815 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:15.073 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:15.073 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:15.331 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:15.331 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:15.331 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:15.331 20:16:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:15.331 20:16:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:15.331 20:16:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:15.331 20:16:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:15.331 20:16:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:15.331 20:16:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:15.331 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:15.331 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:15.331 20:16:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:15.331 20:16:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:15.331 20:16:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:15.331 20:16:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:15.331 20:16:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:15.331 20:16:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:15.332 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:15.332 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:15.332 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.6IxzXeuE8q 00:17:15.332 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:15.332 20:16:02 
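format_interchange_psk above turns the raw hex string and a digest argument into the NVMeTLSkey-1:01:...: value written to the key files. The trace only shows that an inline python helper is invoked, so the body below is an assumption about what it computes (key bytes plus a little-endian CRC32, base64-encoded, with the digest rendered as two hex digits); treat it as a sketch, not the nvmf/common.sh source:

format_interchange_psk_sketch() {
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib

key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed byte order for the appended checksum
b64 = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), b64), end="")
PYEOF
}

# format_interchange_psk_sketch 00112233445566778899aabbccddeeff 1 should reproduce the
# NVMeTLSkey-1:01:MDAx...JEiQ: value above, provided the CRC layout assumption holds.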
nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.HvK9iJkUq0 00:17:15.332 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:15.332 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:15.332 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.6IxzXeuE8q 00:17:15.332 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.HvK9iJkUq0 00:17:15.332 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:15.589 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:16.155 20:16:03 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.6IxzXeuE8q 00:17:16.155 20:16:03 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.6IxzXeuE8q 00:17:16.155 20:16:03 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:16.155 [2024-05-16 20:16:03.250627] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.155 20:16:03 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:16.413 20:16:03 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:16.671 [2024-05-16 20:16:03.739934] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:16.671 [2024-05-16 20:16:03.740052] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:16.671 [2024-05-16 20:16:03.740262] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.671 20:16:03 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:16.929 malloc0 00:17:16.929 20:16:04 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:17.494 20:16:04 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6IxzXeuE8q 00:17:17.494 [2024-05-16 20:16:04.574886] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:17.494 20:16:04 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.6IxzXeuE8q 00:17:17.494 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.659 Initializing NVMe Controllers 00:17:27.659 Attached to NVMe over Fabrics controller 
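Stripped of the tracing, setup_nvmf_tgt and the first perf pass above amount to: register the transport, create cnode1 without -a so access is granted per host, attach a TLS listener (-k), back it with malloc0, admit host1 with the PSK written to /tmp/tmp.6IxzXeuE8q, and then drive the subsystem with spdk_nvme_perf over the ssl sock implementation:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6IxzXeuE8q

ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path /tmp/tmp.6IxzXeuE8q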
at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:27.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:27.659 Initialization complete. Launching workers. 00:17:27.659 ======================================================== 00:17:27.659 Latency(us) 00:17:27.659 Device Information : IOPS MiB/s Average min max 00:17:27.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7646.65 29.87 8372.00 1296.81 10768.28 00:17:27.659 ======================================================== 00:17:27.659 Total : 7646.65 29.87 8372.00 1296.81 10768.28 00:17:27.659 00:17:27.659 20:16:14 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6IxzXeuE8q 00:17:27.659 20:16:14 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:27.659 20:16:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:27.659 20:16:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:27.659 20:16:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6IxzXeuE8q' 00:17:27.659 20:16:14 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:27.659 20:16:14 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=235446 00:17:27.659 20:16:14 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:27.659 20:16:14 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 235446 /var/tmp/bdevperf.sock 00:17:27.659 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 235446 ']' 00:17:27.659 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:27.659 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:27.659 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:27.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:27.659 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:27.659 20:16:14 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:27.659 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.659 [2024-05-16 20:16:14.737750] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:17:27.659 [2024-05-16 20:16:14.737824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235446 ] 00:17:27.659 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.659 [2024-05-16 20:16:14.797636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.917 [2024-05-16 20:16:14.908471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.917 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:27.917 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:27.917 20:16:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6IxzXeuE8q 00:17:28.175 [2024-05-16 20:16:15.235886] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:28.175 [2024-05-16 20:16:15.236003] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:28.175 TLSTESTn1 00:17:28.433 20:16:15 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:28.433 Running I/O for 10 seconds... 00:17:38.399 00:17:38.399 Latency(us) 00:17:38.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.399 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:38.399 Verification LBA range: start 0x0 length 0x2000 00:17:38.399 TLSTESTn1 : 10.04 3013.20 11.77 0.00 0.00 42374.46 9417.77 52817.16 00:17:38.399 =================================================================================================================== 00:17:38.399 Total : 3013.20 11.77 0.00 0.00 42374.46 9417.77 52817.16 00:17:38.399 0 00:17:38.399 20:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:38.399 20:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 235446 00:17:38.399 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 235446 ']' 00:17:38.399 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 235446 00:17:38.399 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:38.399 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:38.399 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 235446 00:17:38.656 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:38.656 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:38.656 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 235446' 00:17:38.656 killing process with pid 235446 00:17:38.656 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 235446 00:17:38.656 Received shutdown signal, test time was about 10.000000 seconds 00:17:38.656 00:17:38.656 Latency(us) 00:17:38.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.656 
=================================================================================================================== 00:17:38.656 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:38.656 [2024-05-16 20:16:25.556295] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:38.656 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 235446 00:17:38.914 20:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HvK9iJkUq0 00:17:38.914 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:38.914 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HvK9iJkUq0 00:17:38.914 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:38.914 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.914 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:38.914 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.914 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HvK9iJkUq0 00:17:38.914 20:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:38.914 20:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:38.914 20:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:38.914 20:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.HvK9iJkUq0' 00:17:38.914 20:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:38.915 20:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=236645 00:17:38.915 20:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:38.915 20:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:38.915 20:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 236645 /var/tmp/bdevperf.sock 00:17:38.915 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 236645 ']' 00:17:38.915 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.915 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:38.915 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:38.915 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:38.915 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.915 [2024-05-16 20:16:25.864305] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:17:38.915 [2024-05-16 20:16:25.864382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236645 ] 00:17:38.915 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.915 [2024-05-16 20:16:25.927041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.915 [2024-05-16 20:16:26.033725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.172 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:39.172 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:39.172 20:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HvK9iJkUq0 00:17:39.430 [2024-05-16 20:16:26.350482] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:39.431 [2024-05-16 20:16:26.350619] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:39.431 [2024-05-16 20:16:26.355873] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:39.431 [2024-05-16 20:16:26.356368] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7aee70 (107): Transport endpoint is not connected 00:17:39.431 [2024-05-16 20:16:26.357357] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7aee70 (9): Bad file descriptor 00:17:39.431 [2024-05-16 20:16:26.358357] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:39.431 [2024-05-16 20:16:26.358377] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:39.431 [2024-05-16 20:16:26.358408] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
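Note on the expected failure here (target/tls.sh@146): the target only registered /tmp/tmp.6IxzXeuE8q for nqn.2016-06.io.spdk:host1, so presenting /tmp/tmp.HvK9iJkUq0 leaves the TLS session unestablished (the errno 107 and "Bad file descriptor" errors above) and bdev_nvme_attach_controller returns the JSON-RPC error dumped below. The surrounding NOT wrapper turns that failure into a pass; a minimal sketch of the idiom, assuming a simplified helper rather than the real autotest_common.sh implementation:

  NOT() {
      # Succeed only when the wrapped command fails (sketch; the real helper
      # tracks the exit status in "es", as the autotest_common.sh@651/@659
      # trace lines around these tests show).
      if "$@"; then
          return 1   # unexpected success -> test failure
      fi
      return 0       # expected failure -> test passes
  }
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HvK9iJkUq0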
00:17:39.431 request: 00:17:39.431 { 00:17:39.431 "name": "TLSTEST", 00:17:39.431 "trtype": "tcp", 00:17:39.431 "traddr": "10.0.0.2", 00:17:39.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:39.431 "adrfam": "ipv4", 00:17:39.431 "trsvcid": "4420", 00:17:39.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.431 "psk": "/tmp/tmp.HvK9iJkUq0", 00:17:39.431 "method": "bdev_nvme_attach_controller", 00:17:39.431 "req_id": 1 00:17:39.431 } 00:17:39.431 Got JSON-RPC error response 00:17:39.431 response: 00:17:39.431 { 00:17:39.431 "code": -32602, 00:17:39.431 "message": "Invalid parameters" 00:17:39.431 } 00:17:39.431 20:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 236645 00:17:39.431 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 236645 ']' 00:17:39.431 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 236645 00:17:39.431 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:39.431 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:39.431 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 236645 00:17:39.431 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:39.431 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:39.431 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 236645' 00:17:39.431 killing process with pid 236645 00:17:39.431 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 236645 00:17:39.431 Received shutdown signal, test time was about 10.000000 seconds 00:17:39.431 00:17:39.431 Latency(us) 00:17:39.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.431 =================================================================================================================== 00:17:39.431 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:39.431 [2024-05-16 20:16:26.402436] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:39.431 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 236645 00:17:39.689 20:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:39.689 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:39.689 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:39.689 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:39.689 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:39.689 20:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.6IxzXeuE8q 00:17:39.689 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:39.689 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.6IxzXeuE8q 00:17:39.689 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:39.689 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:39.689 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:39.689 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:17:39.689 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.6IxzXeuE8q 00:17:39.689 20:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:39.689 20:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:39.689 20:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:39.689 20:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6IxzXeuE8q' 00:17:39.689 20:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:39.689 20:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=236788 00:17:39.690 20:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:39.690 20:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:39.690 20:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 236788 /var/tmp/bdevperf.sock 00:17:39.690 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 236788 ']' 00:17:39.690 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:39.690 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:39.690 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:39.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:39.690 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:39.690 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.690 [2024-05-16 20:16:26.686245] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:17:39.690 [2024-05-16 20:16:26.686321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236788 ] 00:17:39.690 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.690 [2024-05-16 20:16:26.742438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.948 [2024-05-16 20:16:26.845997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.948 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:39.948 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:39.948 20:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.6IxzXeuE8q 00:17:40.207 [2024-05-16 20:16:27.188659] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.207 [2024-05-16 20:16:27.188805] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:40.207 [2024-05-16 20:16:27.197509] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:40.207 [2024-05-16 20:16:27.197538] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:40.207 [2024-05-16 20:16:27.197591] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:40.207 [2024-05-16 20:16:27.198605] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c10e70 (107): Transport endpoint is not connected 00:17:40.207 [2024-05-16 20:16:27.199598] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c10e70 (9): Bad file descriptor 00:17:40.207 [2024-05-16 20:16:27.200597] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:40.207 [2024-05-16 20:16:27.200617] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:40.207 [2024-05-16 20:16:27.200649] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:40.207 request: 00:17:40.207 { 00:17:40.207 "name": "TLSTEST", 00:17:40.207 "trtype": "tcp", 00:17:40.207 "traddr": "10.0.0.2", 00:17:40.207 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:40.207 "adrfam": "ipv4", 00:17:40.207 "trsvcid": "4420", 00:17:40.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.207 "psk": "/tmp/tmp.6IxzXeuE8q", 00:17:40.207 "method": "bdev_nvme_attach_controller", 00:17:40.207 "req_id": 1 00:17:40.207 } 00:17:40.207 Got JSON-RPC error response 00:17:40.207 response: 00:17:40.207 { 00:17:40.207 "code": -32602, 00:17:40.207 "message": "Invalid parameters" 00:17:40.207 } 00:17:40.207 20:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 236788 00:17:40.207 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 236788 ']' 00:17:40.207 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 236788 00:17:40.207 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:40.207 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:40.207 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 236788 00:17:40.207 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:40.207 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:40.207 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 236788' 00:17:40.207 killing process with pid 236788 00:17:40.207 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 236788 00:17:40.207 Received shutdown signal, test time was about 10.000000 seconds 00:17:40.207 00:17:40.207 Latency(us) 00:17:40.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.207 =================================================================================================================== 00:17:40.207 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:40.207 [2024-05-16 20:16:27.250643] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:40.207 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 236788 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.6IxzXeuE8q 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.6IxzXeuE8q 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.6IxzXeuE8q 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6IxzXeuE8q' 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=236920 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 236920 /var/tmp/bdevperf.sock 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 236920 ']' 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:40.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:40.465 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:40.465 [2024-05-16 20:16:27.542404] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:17:40.466 [2024-05-16 20:16:27.542479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236920 ] 00:17:40.466 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.466 [2024-05-16 20:16:27.603219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.723 [2024-05-16 20:16:27.712613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.723 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:40.723 20:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:40.723 20:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6IxzXeuE8q 00:17:40.980 [2024-05-16 20:16:28.031723] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.980 [2024-05-16 20:16:28.031851] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:40.980 [2024-05-16 20:16:28.039639] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:40.980 [2024-05-16 20:16:28.039667] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:40.980 [2024-05-16 20:16:28.039719] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:40.981 [2024-05-16 20:16:28.040508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf42e70 (107): Transport endpoint is not connected 00:17:40.981 [2024-05-16 20:16:28.041484] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf42e70 (9): Bad file descriptor 00:17:40.981 [2024-05-16 20:16:28.042484] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:40.981 [2024-05-16 20:16:28.042503] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:40.981 [2024-05-16 20:16:28.042534] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
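Both mismatched pairings fail at PSK lookup on the target side: the listener builds a PSK identity of the form "NVMe0R01 <hostnqn> <subnqn>" and finds no key for it, as the "Could not find PSK for identity" messages above show for host2/cnode1 and host1/cnode2; only the host1/cnode1 pairing registered via nvmf_subsystem_add_host resolves. A small illustration of that identity string, using the NQNs from this run (the composition is inferred from the log messages, not from the source code):

  hostnqn=nqn.2016-06.io.spdk:host1
  subnqn=nqn.2016-06.io.spdk:cnode2
  identity="NVMe0R01 ${hostnqn} ${subnqn}"
  echo "$identity"   # NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2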
00:17:40.981 request: 00:17:40.981 { 00:17:40.981 "name": "TLSTEST", 00:17:40.981 "trtype": "tcp", 00:17:40.981 "traddr": "10.0.0.2", 00:17:40.981 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.981 "adrfam": "ipv4", 00:17:40.981 "trsvcid": "4420", 00:17:40.981 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:40.981 "psk": "/tmp/tmp.6IxzXeuE8q", 00:17:40.981 "method": "bdev_nvme_attach_controller", 00:17:40.981 "req_id": 1 00:17:40.981 } 00:17:40.981 Got JSON-RPC error response 00:17:40.981 response: 00:17:40.981 { 00:17:40.981 "code": -32602, 00:17:40.981 "message": "Invalid parameters" 00:17:40.981 } 00:17:40.981 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 236920 00:17:40.981 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 236920 ']' 00:17:40.981 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 236920 00:17:40.981 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:40.981 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:40.981 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 236920 00:17:40.981 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:40.981 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:40.981 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 236920' 00:17:40.981 killing process with pid 236920 00:17:40.981 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 236920 00:17:40.981 Received shutdown signal, test time was about 10.000000 seconds 00:17:40.981 00:17:40.981 Latency(us) 00:17:40.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.981 =================================================================================================================== 00:17:40.981 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:40.981 [2024-05-16 20:16:28.083717] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:40.981 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 236920 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=237059 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 237059 /var/tmp/bdevperf.sock 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 237059 ']' 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:41.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:41.238 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:41.238 [2024-05-16 20:16:28.371761] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:17:41.238 [2024-05-16 20:16:28.371862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237059 ] 00:17:41.496 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.496 [2024-05-16 20:16:28.429440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.496 [2024-05-16 20:16:28.532070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.496 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:41.496 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:41.496 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:41.805 [2024-05-16 20:16:28.871874] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:41.805 [2024-05-16 20:16:28.873366] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13437d0 (9): Bad file descriptor 00:17:41.805 [2024-05-16 20:16:28.874362] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:41.805 [2024-05-16 20:16:28.874383] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:41.805 [2024-05-16 20:16:28.874413] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:41.805 request: 00:17:41.805 { 00:17:41.805 "name": "TLSTEST", 00:17:41.805 "trtype": "tcp", 00:17:41.805 "traddr": "10.0.0.2", 00:17:41.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:41.805 "adrfam": "ipv4", 00:17:41.805 "trsvcid": "4420", 00:17:41.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.805 "method": "bdev_nvme_attach_controller", 00:17:41.805 "req_id": 1 00:17:41.805 } 00:17:41.805 Got JSON-RPC error response 00:17:41.805 response: 00:17:41.805 { 00:17:41.805 "code": -32602, 00:17:41.805 "message": "Invalid parameters" 00:17:41.805 } 00:17:41.805 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 237059 00:17:41.805 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 237059 ']' 00:17:41.805 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 237059 00:17:41.805 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:41.805 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:41.805 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 237059 00:17:41.805 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:41.805 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:41.805 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 237059' 00:17:41.805 killing process with pid 237059 00:17:41.805 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 237059 00:17:41.805 Received shutdown signal, test time was about 10.000000 seconds 00:17:41.805 00:17:41.805 Latency(us) 00:17:41.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.805 =================================================================================================================== 00:17:41.805 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:41.805 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 237059 00:17:42.063 20:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:42.063 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:42.063 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:42.063 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:42.063 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:42.063 20:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 233549 00:17:42.063 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 233549 ']' 00:17:42.063 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 233549 00:17:42.063 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:42.063 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:42.063 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 233549 00:17:42.063 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:42.063 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:42.063 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 233549' 00:17:42.063 killing process with pid 233549 00:17:42.063 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 233549 00:17:42.063 
[2024-05-16 20:16:29.206660] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:42.063 [2024-05-16 20:16:29.206718] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:42.063 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 233549 00:17:42.627 20:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:42.627 20:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:42.627 20:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:42.627 20:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:42.627 20:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:42.627 20:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:42.628 20:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:42.628 20:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:42.628 20:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:42.628 20:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.rGnbouWqRR 00:17:42.628 20:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:42.628 20:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.rGnbouWqRR 00:17:42.628 20:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:42.628 20:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:42.628 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:42.628 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.628 20:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=237210 00:17:42.628 20:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:42.628 20:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 237210 00:17:42.628 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 237210 ']' 00:17:42.628 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.628 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:42.628 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.628 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:42.628 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.628 [2024-05-16 20:16:29.580754] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:17:42.628 [2024-05-16 20:16:29.580832] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.628 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.628 [2024-05-16 20:16:29.642587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.628 [2024-05-16 20:16:29.746839] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.628 [2024-05-16 20:16:29.746912] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.628 [2024-05-16 20:16:29.746926] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.628 [2024-05-16 20:16:29.746952] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.628 [2024-05-16 20:16:29.746962] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.628 [2024-05-16 20:16:29.747003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.885 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:42.885 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:42.885 20:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:42.885 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:42.886 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.886 20:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.886 20:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.rGnbouWqRR 00:17:42.886 20:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rGnbouWqRR 00:17:42.886 20:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:43.143 [2024-05-16 20:16:30.110967] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.143 20:16:30 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:43.401 20:16:30 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:43.659 [2024-05-16 20:16:30.620305] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:43.659 [2024-05-16 20:16:30.620443] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:43.659 [2024-05-16 20:16:30.620640] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.659 20:16:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:43.917 malloc0 00:17:43.917 20:16:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
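Condensed from the rpc.py calls traced here (and earlier for /tmp/tmp.6IxzXeuE8q), the target-side TLS setup is the sequence below; the key file was written from the interchange-format string NVMeTLSkey-1:02:... and restricted to mode 0600 above, and the host registration with --psk is traced immediately below:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  key=/tmp/tmp.rGnbouWqRR    # 0600 interchange-format PSK created above
  $rpc sock_impl_set_options -i ssl --tls-version 13   # traced at target/tls.sh@130 for the first target instance
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"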
00:17:44.175 20:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rGnbouWqRR 00:17:44.433 [2024-05-16 20:16:31.422889] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:44.433 20:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rGnbouWqRR 00:17:44.433 20:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:44.433 20:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:44.433 20:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:44.433 20:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rGnbouWqRR' 00:17:44.433 20:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:44.433 20:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=237377 00:17:44.433 20:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:44.433 20:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 237377 /var/tmp/bdevperf.sock 00:17:44.433 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 237377 ']' 00:17:44.433 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:44.433 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:44.433 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:44.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:44.433 20:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:44.433 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:44.433 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.433 [2024-05-16 20:16:31.487575] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:17:44.433 [2024-05-16 20:16:31.487644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237377 ] 00:17:44.433 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.433 [2024-05-16 20:16:31.550387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.691 [2024-05-16 20:16:31.659270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.691 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:44.691 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:44.691 20:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rGnbouWqRR 00:17:44.949 [2024-05-16 20:16:32.038846] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:44.949 [2024-05-16 20:16:32.038989] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:45.207 TLSTESTn1 00:17:45.207 20:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:45.207 Running I/O for 10 seconds... 00:17:55.189 00:17:55.189 Latency(us) 00:17:55.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.189 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:55.189 Verification LBA range: start 0x0 length 0x2000 00:17:55.189 TLSTESTn1 : 10.02 3276.91 12.80 0.00 0.00 38984.67 6407.96 80002.47 00:17:55.189 =================================================================================================================== 00:17:55.189 Total : 3276.91 12.80 0.00 0.00 38984.67 6407.96 80002.47 00:17:55.189 0 00:17:55.189 20:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:55.189 20:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 237377 00:17:55.189 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 237377 ']' 00:17:55.189 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 237377 00:17:55.189 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:55.189 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:55.189 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 237377 00:17:55.189 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:55.189 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:55.189 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 237377' 00:17:55.189 killing process with pid 237377 00:17:55.189 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 237377 00:17:55.189 Received shutdown signal, test time was about 10.000000 seconds 00:17:55.189 00:17:55.189 Latency(us) 00:17:55.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.189 
=================================================================================================================== 00:17:55.189 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:55.189 [2024-05-16 20:16:42.319663] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:55.189 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 237377 00:17:55.447 20:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.rGnbouWqRR 00:17:55.447 20:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rGnbouWqRR 00:17:55.447 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:55.447 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rGnbouWqRR 00:17:55.447 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:55.447 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:55.447 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:55.447 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:55.447 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rGnbouWqRR 00:17:55.447 20:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:55.447 20:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:55.447 20:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:55.447 20:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rGnbouWqRR' 00:17:55.447 20:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:55.447 20:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=238692 00:17:55.447 20:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:55.447 20:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:55.447 20:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 238692 /var/tmp/bdevperf.sock 00:17:55.447 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 238692 ']' 00:17:55.447 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.448 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:55.448 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:55.448 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:55.448 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.706 [2024-05-16 20:16:42.625561] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:17:55.706 [2024-05-16 20:16:42.625639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid238692 ] 00:17:55.706 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.706 [2024-05-16 20:16:42.682365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.706 [2024-05-16 20:16:42.785086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.964 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:55.964 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:55.964 20:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rGnbouWqRR 00:17:56.222 [2024-05-16 20:16:43.134570] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:56.222 [2024-05-16 20:16:43.134648] bdev_nvme.c:6116:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:56.222 [2024-05-16 20:16:43.134667] bdev_nvme.c:6225:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.rGnbouWqRR 00:17:56.222 request: 00:17:56.222 { 00:17:56.222 "name": "TLSTEST", 00:17:56.222 "trtype": "tcp", 00:17:56.222 "traddr": "10.0.0.2", 00:17:56.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:56.222 "adrfam": "ipv4", 00:17:56.222 "trsvcid": "4420", 00:17:56.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.222 "psk": "/tmp/tmp.rGnbouWqRR", 00:17:56.222 "method": "bdev_nvme_attach_controller", 00:17:56.222 "req_id": 1 00:17:56.222 } 00:17:56.222 Got JSON-RPC error response 00:17:56.222 response: 00:17:56.222 { 00:17:56.222 "code": -1, 00:17:56.222 "message": "Operation not permitted" 00:17:56.222 } 00:17:56.222 20:16:43 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 238692 00:17:56.222 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 238692 ']' 00:17:56.222 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 238692 00:17:56.222 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:56.222 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:56.222 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 238692 00:17:56.222 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:56.222 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:56.222 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 238692' 00:17:56.222 killing process with pid 238692 00:17:56.222 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 238692 00:17:56.222 Received shutdown signal, test time was about 10.000000 seconds 00:17:56.222 00:17:56.222 Latency(us) 00:17:56.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.222 =================================================================================================================== 00:17:56.222 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:56.222 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # 
wait 238692 00:17:56.480 20:16:43 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:56.480 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:56.480 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:56.480 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:56.480 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:56.480 20:16:43 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 237210 00:17:56.480 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 237210 ']' 00:17:56.480 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 237210 00:17:56.480 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:56.480 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:56.480 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 237210 00:17:56.480 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:56.480 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:56.480 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 237210' 00:17:56.480 killing process with pid 237210 00:17:56.480 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 237210 00:17:56.480 [2024-05-16 20:16:43.428668] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:56.480 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 237210 00:17:56.480 [2024-05-16 20:16:43.428727] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:56.738 20:16:43 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:56.738 20:16:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:56.738 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:56.738 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.738 20:16:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=238839 00:17:56.738 20:16:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:56.738 20:16:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 238839 00:17:56.738 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 238839 ']' 00:17:56.738 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.738 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:56.738 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.738 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:56.738 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.738 [2024-05-16 20:16:43.764772] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
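The nvmf target starting here is driven through the same setup helper while the key is still mode 0666, so the run is expected to fail partway through: the transport, subsystem, listener and namespace are created normally, but nvmf_subsystem_add_host cannot read the key and returns the -32603 "Internal error" response shown further down. The failing step in isolation, as a sketch (rpc.py again stands for scripts/rpc.py; the key path is the one used by this run):

  # target side refuses to read a world-readable PSK
  chmod 0666 /tmp/tmp.rGnbouWqRR
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.rGnbouWqRR \
      && echo "unexpected success"   # expected: "Could not retrieve PSK from file"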
00:17:56.738 [2024-05-16 20:16:43.764883] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.738 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.738 [2024-05-16 20:16:43.833562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.996 [2024-05-16 20:16:43.948100] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.996 [2024-05-16 20:16:43.948162] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.996 [2024-05-16 20:16:43.948180] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.996 [2024-05-16 20:16:43.948194] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.996 [2024-05-16 20:16:43.948206] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:56.996 [2024-05-16 20:16:43.948245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.562 20:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:57.562 20:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:57.563 20:16:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:57.563 20:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:57.563 20:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.821 20:16:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.821 20:16:44 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.rGnbouWqRR 00:17:57.821 20:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:57.821 20:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.rGnbouWqRR 00:17:57.821 20:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:17:57.821 20:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:57.821 20:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:17:57.821 20:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:57.821 20:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.rGnbouWqRR 00:17:57.821 20:16:44 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rGnbouWqRR 00:17:57.821 20:16:44 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:57.821 [2024-05-16 20:16:44.953289] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.078 20:16:44 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:58.078 20:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:58.644 [2024-05-16 20:16:45.482718] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:58.644 [2024-05-16 20:16:45.482821] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:58.644 [2024-05-16 20:16:45.483034] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.644 20:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:58.644 malloc0 00:17:58.902 20:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:59.160 20:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rGnbouWqRR 00:17:59.160 [2024-05-16 20:16:46.280768] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:59.160 [2024-05-16 20:16:46.280807] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:59.160 [2024-05-16 20:16:46.280846] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:59.160 request: 00:17:59.160 { 00:17:59.160 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.160 "host": "nqn.2016-06.io.spdk:host1", 00:17:59.160 "psk": "/tmp/tmp.rGnbouWqRR", 00:17:59.160 "method": "nvmf_subsystem_add_host", 00:17:59.160 "req_id": 1 00:17:59.160 } 00:17:59.160 Got JSON-RPC error response 00:17:59.160 response: 00:17:59.160 { 00:17:59.160 "code": -32603, 00:17:59.160 "message": "Internal error" 00:17:59.160 } 00:17:59.160 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:59.160 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:59.160 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:59.160 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:59.160 20:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 238839 00:17:59.160 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 238839 ']' 00:17:59.160 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 238839 00:17:59.160 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:59.160 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:59.160 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 238839 00:17:59.418 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:59.418 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:59.418 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 238839' 00:17:59.418 killing process with pid 238839 00:17:59.418 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 238839 00:17:59.418 [2024-05-16 20:16:46.324201] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:59.418 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 238839 00:17:59.677 20:16:46 nvmf_tcp.nvmf_tls -- 
target/tls.sh@181 -- # chmod 0600 /tmp/tmp.rGnbouWqRR 00:17:59.677 20:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:59.677 20:16:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:59.677 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:59.677 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.677 20:16:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=239260 00:17:59.677 20:16:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:59.677 20:16:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 239260 00:17:59.677 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 239260 ']' 00:17:59.677 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.677 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:59.677 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.677 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:59.677 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.677 [2024-05-16 20:16:46.642524] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:17:59.677 [2024-05-16 20:16:46.642601] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.677 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.677 [2024-05-16 20:16:46.708215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.935 [2024-05-16 20:16:46.827929] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.935 [2024-05-16 20:16:46.827991] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.935 [2024-05-16 20:16:46.828007] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.935 [2024-05-16 20:16:46.828021] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.935 [2024-05-16 20:16:46.828032] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
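With the key back to mode 0600 (the chmod at the start of this block), the target that is starting here is configured through the same sequence of RPCs, this time end to end; the calls are spread across the trace below, so here they are condensed into one sketch (rpc.py abbreviates scripts/rpc.py; every command and argument is taken verbatim from this log):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k creates the listener with "secure_channel": true, as the save_config dump later in this log shows
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  # 32 MiB RAM-backed bdev with 4 KiB blocks, exported as namespace 1
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # per-host PSK file; note the "PSK path" deprecation warning this triggers in the trace
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rGnbouWqRR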
00:17:59.935 [2024-05-16 20:16:46.828065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.935 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:59.935 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:59.935 20:16:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:59.935 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:59.935 20:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.935 20:16:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.935 20:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.rGnbouWqRR 00:17:59.935 20:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rGnbouWqRR 00:17:59.935 20:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:00.193 [2024-05-16 20:16:47.208250] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.193 20:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:00.451 20:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:00.710 [2024-05-16 20:16:47.697516] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:00.710 [2024-05-16 20:16:47.697655] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:00.710 [2024-05-16 20:16:47.697874] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.710 20:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:00.968 malloc0 00:18:00.968 20:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:01.226 20:16:48 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rGnbouWqRR 00:18:01.484 [2024-05-16 20:16:48.475233] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:01.484 20:16:48 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=239430 00:18:01.484 20:16:48 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:01.484 20:16:48 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:01.484 20:16:48 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 239430 /var/tmp/bdevperf.sock 00:18:01.484 20:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 239430 ']' 00:18:01.484 20:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:18:01.484 20:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:01.484 20:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.484 20:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:01.484 20:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.484 [2024-05-16 20:16:48.535145] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:18:01.484 [2024-05-16 20:16:48.535224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid239430 ] 00:18:01.484 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.484 [2024-05-16 20:16:48.592284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.741 [2024-05-16 20:16:48.703801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.741 20:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:01.741 20:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:01.741 20:16:48 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rGnbouWqRR 00:18:01.999 [2024-05-16 20:16:49.081753] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:01.999 [2024-05-16 20:16:49.081890] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:02.256 TLSTESTn1 00:18:02.256 20:16:49 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:02.514 20:16:49 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:02.514 "subsystems": [ 00:18:02.514 { 00:18:02.514 "subsystem": "keyring", 00:18:02.514 "config": [] 00:18:02.514 }, 00:18:02.514 { 00:18:02.514 "subsystem": "iobuf", 00:18:02.514 "config": [ 00:18:02.514 { 00:18:02.514 "method": "iobuf_set_options", 00:18:02.514 "params": { 00:18:02.514 "small_pool_count": 8192, 00:18:02.514 "large_pool_count": 1024, 00:18:02.514 "small_bufsize": 8192, 00:18:02.514 "large_bufsize": 135168 00:18:02.514 } 00:18:02.514 } 00:18:02.514 ] 00:18:02.514 }, 00:18:02.514 { 00:18:02.514 "subsystem": "sock", 00:18:02.514 "config": [ 00:18:02.514 { 00:18:02.514 "method": "sock_set_default_impl", 00:18:02.514 "params": { 00:18:02.514 "impl_name": "posix" 00:18:02.514 } 00:18:02.514 }, 00:18:02.514 { 00:18:02.514 "method": "sock_impl_set_options", 00:18:02.514 "params": { 00:18:02.514 "impl_name": "ssl", 00:18:02.514 "recv_buf_size": 4096, 00:18:02.514 "send_buf_size": 4096, 00:18:02.514 "enable_recv_pipe": true, 00:18:02.514 "enable_quickack": false, 00:18:02.514 "enable_placement_id": 0, 00:18:02.514 "enable_zerocopy_send_server": true, 00:18:02.514 "enable_zerocopy_send_client": false, 00:18:02.514 "zerocopy_threshold": 0, 00:18:02.514 "tls_version": 0, 00:18:02.514 "enable_ktls": 
false 00:18:02.514 } 00:18:02.514 }, 00:18:02.514 { 00:18:02.514 "method": "sock_impl_set_options", 00:18:02.514 "params": { 00:18:02.514 "impl_name": "posix", 00:18:02.514 "recv_buf_size": 2097152, 00:18:02.514 "send_buf_size": 2097152, 00:18:02.514 "enable_recv_pipe": true, 00:18:02.514 "enable_quickack": false, 00:18:02.514 "enable_placement_id": 0, 00:18:02.514 "enable_zerocopy_send_server": true, 00:18:02.514 "enable_zerocopy_send_client": false, 00:18:02.514 "zerocopy_threshold": 0, 00:18:02.514 "tls_version": 0, 00:18:02.514 "enable_ktls": false 00:18:02.514 } 00:18:02.514 } 00:18:02.514 ] 00:18:02.514 }, 00:18:02.514 { 00:18:02.514 "subsystem": "vmd", 00:18:02.514 "config": [] 00:18:02.514 }, 00:18:02.514 { 00:18:02.514 "subsystem": "accel", 00:18:02.514 "config": [ 00:18:02.514 { 00:18:02.514 "method": "accel_set_options", 00:18:02.514 "params": { 00:18:02.514 "small_cache_size": 128, 00:18:02.514 "large_cache_size": 16, 00:18:02.514 "task_count": 2048, 00:18:02.514 "sequence_count": 2048, 00:18:02.514 "buf_count": 2048 00:18:02.514 } 00:18:02.514 } 00:18:02.514 ] 00:18:02.514 }, 00:18:02.514 { 00:18:02.514 "subsystem": "bdev", 00:18:02.514 "config": [ 00:18:02.514 { 00:18:02.514 "method": "bdev_set_options", 00:18:02.514 "params": { 00:18:02.514 "bdev_io_pool_size": 65535, 00:18:02.514 "bdev_io_cache_size": 256, 00:18:02.514 "bdev_auto_examine": true, 00:18:02.514 "iobuf_small_cache_size": 128, 00:18:02.514 "iobuf_large_cache_size": 16 00:18:02.514 } 00:18:02.514 }, 00:18:02.514 { 00:18:02.514 "method": "bdev_raid_set_options", 00:18:02.514 "params": { 00:18:02.514 "process_window_size_kb": 1024 00:18:02.514 } 00:18:02.514 }, 00:18:02.514 { 00:18:02.514 "method": "bdev_iscsi_set_options", 00:18:02.515 "params": { 00:18:02.515 "timeout_sec": 30 00:18:02.515 } 00:18:02.515 }, 00:18:02.515 { 00:18:02.515 "method": "bdev_nvme_set_options", 00:18:02.515 "params": { 00:18:02.515 "action_on_timeout": "none", 00:18:02.515 "timeout_us": 0, 00:18:02.515 "timeout_admin_us": 0, 00:18:02.515 "keep_alive_timeout_ms": 10000, 00:18:02.515 "arbitration_burst": 0, 00:18:02.515 "low_priority_weight": 0, 00:18:02.515 "medium_priority_weight": 0, 00:18:02.515 "high_priority_weight": 0, 00:18:02.515 "nvme_adminq_poll_period_us": 10000, 00:18:02.515 "nvme_ioq_poll_period_us": 0, 00:18:02.515 "io_queue_requests": 0, 00:18:02.515 "delay_cmd_submit": true, 00:18:02.515 "transport_retry_count": 4, 00:18:02.515 "bdev_retry_count": 3, 00:18:02.515 "transport_ack_timeout": 0, 00:18:02.515 "ctrlr_loss_timeout_sec": 0, 00:18:02.515 "reconnect_delay_sec": 0, 00:18:02.515 "fast_io_fail_timeout_sec": 0, 00:18:02.515 "disable_auto_failback": false, 00:18:02.515 "generate_uuids": false, 00:18:02.515 "transport_tos": 0, 00:18:02.515 "nvme_error_stat": false, 00:18:02.515 "rdma_srq_size": 0, 00:18:02.515 "io_path_stat": false, 00:18:02.515 "allow_accel_sequence": false, 00:18:02.515 "rdma_max_cq_size": 0, 00:18:02.515 "rdma_cm_event_timeout_ms": 0, 00:18:02.515 "dhchap_digests": [ 00:18:02.515 "sha256", 00:18:02.515 "sha384", 00:18:02.515 "sha512" 00:18:02.515 ], 00:18:02.515 "dhchap_dhgroups": [ 00:18:02.515 "null", 00:18:02.515 "ffdhe2048", 00:18:02.515 "ffdhe3072", 00:18:02.515 "ffdhe4096", 00:18:02.515 "ffdhe6144", 00:18:02.515 "ffdhe8192" 00:18:02.515 ] 00:18:02.515 } 00:18:02.515 }, 00:18:02.515 { 00:18:02.515 "method": "bdev_nvme_set_hotplug", 00:18:02.515 "params": { 00:18:02.515 "period_us": 100000, 00:18:02.515 "enable": false 00:18:02.515 } 00:18:02.515 }, 00:18:02.515 { 00:18:02.515 "method": 
"bdev_malloc_create", 00:18:02.515 "params": { 00:18:02.515 "name": "malloc0", 00:18:02.515 "num_blocks": 8192, 00:18:02.515 "block_size": 4096, 00:18:02.515 "physical_block_size": 4096, 00:18:02.515 "uuid": "b86916e9-b408-45d9-91d2-6e8762d7ff2b", 00:18:02.515 "optimal_io_boundary": 0 00:18:02.515 } 00:18:02.515 }, 00:18:02.515 { 00:18:02.515 "method": "bdev_wait_for_examine" 00:18:02.515 } 00:18:02.515 ] 00:18:02.515 }, 00:18:02.515 { 00:18:02.515 "subsystem": "nbd", 00:18:02.515 "config": [] 00:18:02.515 }, 00:18:02.515 { 00:18:02.515 "subsystem": "scheduler", 00:18:02.515 "config": [ 00:18:02.515 { 00:18:02.515 "method": "framework_set_scheduler", 00:18:02.515 "params": { 00:18:02.515 "name": "static" 00:18:02.515 } 00:18:02.515 } 00:18:02.515 ] 00:18:02.515 }, 00:18:02.515 { 00:18:02.515 "subsystem": "nvmf", 00:18:02.515 "config": [ 00:18:02.515 { 00:18:02.515 "method": "nvmf_set_config", 00:18:02.515 "params": { 00:18:02.515 "discovery_filter": "match_any", 00:18:02.515 "admin_cmd_passthru": { 00:18:02.515 "identify_ctrlr": false 00:18:02.515 } 00:18:02.515 } 00:18:02.515 }, 00:18:02.515 { 00:18:02.515 "method": "nvmf_set_max_subsystems", 00:18:02.515 "params": { 00:18:02.515 "max_subsystems": 1024 00:18:02.515 } 00:18:02.515 }, 00:18:02.515 { 00:18:02.515 "method": "nvmf_set_crdt", 00:18:02.515 "params": { 00:18:02.515 "crdt1": 0, 00:18:02.515 "crdt2": 0, 00:18:02.515 "crdt3": 0 00:18:02.515 } 00:18:02.515 }, 00:18:02.515 { 00:18:02.515 "method": "nvmf_create_transport", 00:18:02.515 "params": { 00:18:02.515 "trtype": "TCP", 00:18:02.515 "max_queue_depth": 128, 00:18:02.515 "max_io_qpairs_per_ctrlr": 127, 00:18:02.515 "in_capsule_data_size": 4096, 00:18:02.515 "max_io_size": 131072, 00:18:02.515 "io_unit_size": 131072, 00:18:02.515 "max_aq_depth": 128, 00:18:02.515 "num_shared_buffers": 511, 00:18:02.515 "buf_cache_size": 4294967295, 00:18:02.515 "dif_insert_or_strip": false, 00:18:02.515 "zcopy": false, 00:18:02.515 "c2h_success": false, 00:18:02.515 "sock_priority": 0, 00:18:02.515 "abort_timeout_sec": 1, 00:18:02.515 "ack_timeout": 0, 00:18:02.515 "data_wr_pool_size": 0 00:18:02.515 } 00:18:02.515 }, 00:18:02.515 { 00:18:02.515 "method": "nvmf_create_subsystem", 00:18:02.515 "params": { 00:18:02.515 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.515 "allow_any_host": false, 00:18:02.515 "serial_number": "SPDK00000000000001", 00:18:02.515 "model_number": "SPDK bdev Controller", 00:18:02.515 "max_namespaces": 10, 00:18:02.515 "min_cntlid": 1, 00:18:02.515 "max_cntlid": 65519, 00:18:02.515 "ana_reporting": false 00:18:02.515 } 00:18:02.515 }, 00:18:02.515 { 00:18:02.515 "method": "nvmf_subsystem_add_host", 00:18:02.515 "params": { 00:18:02.515 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.515 "host": "nqn.2016-06.io.spdk:host1", 00:18:02.515 "psk": "/tmp/tmp.rGnbouWqRR" 00:18:02.515 } 00:18:02.515 }, 00:18:02.515 { 00:18:02.515 "method": "nvmf_subsystem_add_ns", 00:18:02.515 "params": { 00:18:02.515 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.515 "namespace": { 00:18:02.515 "nsid": 1, 00:18:02.515 "bdev_name": "malloc0", 00:18:02.515 "nguid": "B86916E9B40845D991D26E8762D7FF2B", 00:18:02.515 "uuid": "b86916e9-b408-45d9-91d2-6e8762d7ff2b", 00:18:02.515 "no_auto_visible": false 00:18:02.515 } 00:18:02.515 } 00:18:02.515 }, 00:18:02.515 { 00:18:02.515 "method": "nvmf_subsystem_add_listener", 00:18:02.515 "params": { 00:18:02.515 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.515 "listen_address": { 00:18:02.515 "trtype": "TCP", 00:18:02.515 "adrfam": "IPv4", 00:18:02.515 "traddr": 
"10.0.0.2", 00:18:02.515 "trsvcid": "4420" 00:18:02.515 }, 00:18:02.515 "secure_channel": true 00:18:02.515 } 00:18:02.515 } 00:18:02.515 ] 00:18:02.515 } 00:18:02.515 ] 00:18:02.515 }' 00:18:02.515 20:16:49 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:02.773 20:16:49 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:02.773 "subsystems": [ 00:18:02.773 { 00:18:02.773 "subsystem": "keyring", 00:18:02.773 "config": [] 00:18:02.773 }, 00:18:02.773 { 00:18:02.773 "subsystem": "iobuf", 00:18:02.773 "config": [ 00:18:02.773 { 00:18:02.773 "method": "iobuf_set_options", 00:18:02.773 "params": { 00:18:02.773 "small_pool_count": 8192, 00:18:02.773 "large_pool_count": 1024, 00:18:02.773 "small_bufsize": 8192, 00:18:02.773 "large_bufsize": 135168 00:18:02.773 } 00:18:02.773 } 00:18:02.773 ] 00:18:02.773 }, 00:18:02.773 { 00:18:02.773 "subsystem": "sock", 00:18:02.773 "config": [ 00:18:02.773 { 00:18:02.773 "method": "sock_set_default_impl", 00:18:02.773 "params": { 00:18:02.773 "impl_name": "posix" 00:18:02.773 } 00:18:02.773 }, 00:18:02.773 { 00:18:02.773 "method": "sock_impl_set_options", 00:18:02.773 "params": { 00:18:02.773 "impl_name": "ssl", 00:18:02.773 "recv_buf_size": 4096, 00:18:02.773 "send_buf_size": 4096, 00:18:02.773 "enable_recv_pipe": true, 00:18:02.773 "enable_quickack": false, 00:18:02.773 "enable_placement_id": 0, 00:18:02.773 "enable_zerocopy_send_server": true, 00:18:02.774 "enable_zerocopy_send_client": false, 00:18:02.774 "zerocopy_threshold": 0, 00:18:02.774 "tls_version": 0, 00:18:02.774 "enable_ktls": false 00:18:02.774 } 00:18:02.774 }, 00:18:02.774 { 00:18:02.774 "method": "sock_impl_set_options", 00:18:02.774 "params": { 00:18:02.774 "impl_name": "posix", 00:18:02.774 "recv_buf_size": 2097152, 00:18:02.774 "send_buf_size": 2097152, 00:18:02.774 "enable_recv_pipe": true, 00:18:02.774 "enable_quickack": false, 00:18:02.774 "enable_placement_id": 0, 00:18:02.774 "enable_zerocopy_send_server": true, 00:18:02.774 "enable_zerocopy_send_client": false, 00:18:02.774 "zerocopy_threshold": 0, 00:18:02.774 "tls_version": 0, 00:18:02.774 "enable_ktls": false 00:18:02.774 } 00:18:02.774 } 00:18:02.774 ] 00:18:02.774 }, 00:18:02.774 { 00:18:02.774 "subsystem": "vmd", 00:18:02.774 "config": [] 00:18:02.774 }, 00:18:02.774 { 00:18:02.774 "subsystem": "accel", 00:18:02.774 "config": [ 00:18:02.774 { 00:18:02.774 "method": "accel_set_options", 00:18:02.774 "params": { 00:18:02.774 "small_cache_size": 128, 00:18:02.774 "large_cache_size": 16, 00:18:02.774 "task_count": 2048, 00:18:02.774 "sequence_count": 2048, 00:18:02.774 "buf_count": 2048 00:18:02.774 } 00:18:02.774 } 00:18:02.774 ] 00:18:02.774 }, 00:18:02.774 { 00:18:02.774 "subsystem": "bdev", 00:18:02.774 "config": [ 00:18:02.774 { 00:18:02.774 "method": "bdev_set_options", 00:18:02.774 "params": { 00:18:02.774 "bdev_io_pool_size": 65535, 00:18:02.774 "bdev_io_cache_size": 256, 00:18:02.774 "bdev_auto_examine": true, 00:18:02.774 "iobuf_small_cache_size": 128, 00:18:02.774 "iobuf_large_cache_size": 16 00:18:02.774 } 00:18:02.774 }, 00:18:02.774 { 00:18:02.774 "method": "bdev_raid_set_options", 00:18:02.774 "params": { 00:18:02.774 "process_window_size_kb": 1024 00:18:02.774 } 00:18:02.774 }, 00:18:02.774 { 00:18:02.774 "method": "bdev_iscsi_set_options", 00:18:02.774 "params": { 00:18:02.774 "timeout_sec": 30 00:18:02.774 } 00:18:02.774 }, 00:18:02.774 { 00:18:02.774 "method": "bdev_nvme_set_options", 
00:18:02.774 "params": { 00:18:02.774 "action_on_timeout": "none", 00:18:02.774 "timeout_us": 0, 00:18:02.774 "timeout_admin_us": 0, 00:18:02.774 "keep_alive_timeout_ms": 10000, 00:18:02.774 "arbitration_burst": 0, 00:18:02.774 "low_priority_weight": 0, 00:18:02.774 "medium_priority_weight": 0, 00:18:02.774 "high_priority_weight": 0, 00:18:02.774 "nvme_adminq_poll_period_us": 10000, 00:18:02.774 "nvme_ioq_poll_period_us": 0, 00:18:02.774 "io_queue_requests": 512, 00:18:02.774 "delay_cmd_submit": true, 00:18:02.774 "transport_retry_count": 4, 00:18:02.774 "bdev_retry_count": 3, 00:18:02.774 "transport_ack_timeout": 0, 00:18:02.774 "ctrlr_loss_timeout_sec": 0, 00:18:02.774 "reconnect_delay_sec": 0, 00:18:02.774 "fast_io_fail_timeout_sec": 0, 00:18:02.774 "disable_auto_failback": false, 00:18:02.774 "generate_uuids": false, 00:18:02.774 "transport_tos": 0, 00:18:02.774 "nvme_error_stat": false, 00:18:02.774 "rdma_srq_size": 0, 00:18:02.774 "io_path_stat": false, 00:18:02.774 "allow_accel_sequence": false, 00:18:02.774 "rdma_max_cq_size": 0, 00:18:02.774 "rdma_cm_event_timeout_ms": 0, 00:18:02.774 "dhchap_digests": [ 00:18:02.774 "sha256", 00:18:02.774 "sha384", 00:18:02.774 "sha512" 00:18:02.774 ], 00:18:02.774 "dhchap_dhgroups": [ 00:18:02.774 "null", 00:18:02.774 "ffdhe2048", 00:18:02.774 "ffdhe3072", 00:18:02.774 "ffdhe4096", 00:18:02.774 "ffdhe6144", 00:18:02.774 "ffdhe8192" 00:18:02.774 ] 00:18:02.774 } 00:18:02.774 }, 00:18:02.774 { 00:18:02.774 "method": "bdev_nvme_attach_controller", 00:18:02.774 "params": { 00:18:02.774 "name": "TLSTEST", 00:18:02.774 "trtype": "TCP", 00:18:02.774 "adrfam": "IPv4", 00:18:02.774 "traddr": "10.0.0.2", 00:18:02.774 "trsvcid": "4420", 00:18:02.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.774 "prchk_reftag": false, 00:18:02.774 "prchk_guard": false, 00:18:02.774 "ctrlr_loss_timeout_sec": 0, 00:18:02.774 "reconnect_delay_sec": 0, 00:18:02.774 "fast_io_fail_timeout_sec": 0, 00:18:02.774 "psk": "/tmp/tmp.rGnbouWqRR", 00:18:02.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:02.774 "hdgst": false, 00:18:02.774 "ddgst": false 00:18:02.774 } 00:18:02.774 }, 00:18:02.774 { 00:18:02.774 "method": "bdev_nvme_set_hotplug", 00:18:02.774 "params": { 00:18:02.774 "period_us": 100000, 00:18:02.774 "enable": false 00:18:02.774 } 00:18:02.774 }, 00:18:02.774 { 00:18:02.774 "method": "bdev_wait_for_examine" 00:18:02.774 } 00:18:02.774 ] 00:18:02.774 }, 00:18:02.774 { 00:18:02.774 "subsystem": "nbd", 00:18:02.774 "config": [] 00:18:02.774 } 00:18:02.774 ] 00:18:02.774 }' 00:18:02.774 20:16:49 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 239430 00:18:02.774 20:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 239430 ']' 00:18:02.774 20:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 239430 00:18:02.774 20:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:02.774 20:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:02.774 20:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 239430 00:18:02.774 20:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:02.774 20:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:02.774 20:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 239430' 00:18:02.774 killing process with pid 239430 00:18:02.774 20:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # 
kill 239430 00:18:02.774 Received shutdown signal, test time was about 10.000000 seconds 00:18:02.774 00:18:02.774 Latency(us) 00:18:02.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.774 =================================================================================================================== 00:18:02.774 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:02.774 [2024-05-16 20:16:49.891685] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:02.774 20:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 239430 00:18:03.032 20:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 239260 00:18:03.032 20:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 239260 ']' 00:18:03.032 20:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 239260 00:18:03.032 20:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:03.032 20:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:03.032 20:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 239260 00:18:03.290 20:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:03.290 20:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:03.290 20:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 239260' 00:18:03.290 killing process with pid 239260 00:18:03.290 20:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 239260 00:18:03.290 [2024-05-16 20:16:50.191962] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:03.290 [2024-05-16 20:16:50.192019] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:03.290 20:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 239260 00:18:03.550 20:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:03.550 20:16:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:03.550 20:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:03.550 "subsystems": [ 00:18:03.550 { 00:18:03.550 "subsystem": "keyring", 00:18:03.550 "config": [] 00:18:03.550 }, 00:18:03.550 { 00:18:03.550 "subsystem": "iobuf", 00:18:03.550 "config": [ 00:18:03.550 { 00:18:03.550 "method": "iobuf_set_options", 00:18:03.550 "params": { 00:18:03.550 "small_pool_count": 8192, 00:18:03.550 "large_pool_count": 1024, 00:18:03.550 "small_bufsize": 8192, 00:18:03.550 "large_bufsize": 135168 00:18:03.550 } 00:18:03.550 } 00:18:03.550 ] 00:18:03.550 }, 00:18:03.550 { 00:18:03.550 "subsystem": "sock", 00:18:03.550 "config": [ 00:18:03.550 { 00:18:03.550 "method": "sock_set_default_impl", 00:18:03.550 "params": { 00:18:03.550 "impl_name": "posix" 00:18:03.550 } 00:18:03.550 }, 00:18:03.550 { 00:18:03.550 "method": "sock_impl_set_options", 00:18:03.550 "params": { 00:18:03.550 "impl_name": "ssl", 00:18:03.550 "recv_buf_size": 4096, 00:18:03.550 "send_buf_size": 4096, 00:18:03.550 "enable_recv_pipe": true, 00:18:03.550 "enable_quickack": false, 00:18:03.550 "enable_placement_id": 0, 00:18:03.550 "enable_zerocopy_send_server": 
true, 00:18:03.550 "enable_zerocopy_send_client": false, 00:18:03.550 "zerocopy_threshold": 0, 00:18:03.550 "tls_version": 0, 00:18:03.550 "enable_ktls": false 00:18:03.550 } 00:18:03.550 }, 00:18:03.550 { 00:18:03.550 "method": "sock_impl_set_options", 00:18:03.550 "params": { 00:18:03.550 "impl_name": "posix", 00:18:03.550 "recv_buf_size": 2097152, 00:18:03.550 "send_buf_size": 2097152, 00:18:03.550 "enable_recv_pipe": true, 00:18:03.550 "enable_quickack": false, 00:18:03.550 "enable_placement_id": 0, 00:18:03.550 "enable_zerocopy_send_server": true, 00:18:03.550 "enable_zerocopy_send_client": false, 00:18:03.550 "zerocopy_threshold": 0, 00:18:03.550 "tls_version": 0, 00:18:03.550 "enable_ktls": false 00:18:03.550 } 00:18:03.550 } 00:18:03.550 ] 00:18:03.550 }, 00:18:03.550 { 00:18:03.550 "subsystem": "vmd", 00:18:03.550 "config": [] 00:18:03.550 }, 00:18:03.550 { 00:18:03.550 "subsystem": "accel", 00:18:03.550 "config": [ 00:18:03.550 { 00:18:03.550 "method": "accel_set_options", 00:18:03.550 "params": { 00:18:03.550 "small_cache_size": 128, 00:18:03.550 "large_cache_size": 16, 00:18:03.550 "task_count": 2048, 00:18:03.550 "sequence_count": 2048, 00:18:03.550 "buf_count": 2048 00:18:03.550 } 00:18:03.550 } 00:18:03.550 ] 00:18:03.550 }, 00:18:03.550 { 00:18:03.550 "subsystem": "bdev", 00:18:03.550 "config": [ 00:18:03.550 { 00:18:03.550 "method": "bdev_set_options", 00:18:03.550 "params": { 00:18:03.550 "bdev_io_pool_size": 65535, 00:18:03.550 "bdev_io_cache_size": 256, 00:18:03.550 "bdev_auto_examine": true, 00:18:03.550 "iobuf_small_cache_size": 128, 00:18:03.550 "iobuf_large_cache_size": 16 00:18:03.550 } 00:18:03.550 }, 00:18:03.550 { 00:18:03.550 "method": "bdev_raid_set_options", 00:18:03.550 "params": { 00:18:03.550 "process_window_size_kb": 1024 00:18:03.550 } 00:18:03.550 }, 00:18:03.550 { 00:18:03.550 "method": "bdev_iscsi_set_options", 00:18:03.550 "params": { 00:18:03.550 "timeout_sec": 30 00:18:03.550 } 00:18:03.550 }, 00:18:03.550 { 00:18:03.550 "method": "bdev_nvme_set_options", 00:18:03.550 "params": { 00:18:03.550 "action_on_timeout": "none", 00:18:03.550 "timeout_us": 0, 00:18:03.550 "timeout_admin_us": 0, 00:18:03.550 "keep_alive_timeout_ms": 10000, 00:18:03.550 "arbitration_burst": 0, 00:18:03.550 "low_priority_weight": 0, 00:18:03.550 "medium_priority_weight": 0, 00:18:03.550 "high_priority_weight": 0, 00:18:03.550 "nvme_adminq_poll_period_us": 10000, 00:18:03.550 "nvme_ioq_poll_period_us": 0, 00:18:03.550 "io_queue_requests": 0, 00:18:03.550 "delay_cmd_submit": true, 00:18:03.550 "transport_retry_count": 4, 00:18:03.550 "bdev_retry_count": 3, 00:18:03.550 "transport_ack_timeout": 0, 00:18:03.550 "ctrlr_loss_timeout_sec": 0, 00:18:03.550 "reconnect_delay_sec": 0, 00:18:03.550 "fast_io_fail_timeout_sec": 0, 00:18:03.550 "disable_auto_failback": false, 00:18:03.550 "generate_uuids": false, 00:18:03.550 "transport_tos": 0, 00:18:03.550 "nvme_error_stat": false, 00:18:03.550 "rdma_srq_size": 0, 00:18:03.550 "io_path_stat": false, 00:18:03.550 "allow_accel_sequence": false, 00:18:03.550 "rdma_max_cq_size": 0, 00:18:03.550 "rdma_cm_event_timeout_ms": 0, 00:18:03.550 "dhchap_digests": [ 00:18:03.550 "sha256", 00:18:03.550 "sha384", 00:18:03.550 "sha512" 00:18:03.550 ], 00:18:03.550 "dhchap_dhgroups": [ 00:18:03.550 "null", 00:18:03.550 "ffdhe2048", 00:18:03.550 "ffdhe3072", 00:18:03.550 "ffdhe4096", 00:18:03.550 "ffdhe6144", 00:18:03.550 "ffdhe8192" 00:18:03.550 ] 00:18:03.550 } 00:18:03.550 }, 00:18:03.550 { 00:18:03.551 "method": "bdev_nvme_set_hotplug", 00:18:03.551 
"params": { 00:18:03.551 "period_us": 100000, 00:18:03.551 "enable": false 00:18:03.551 } 00:18:03.551 }, 00:18:03.551 { 00:18:03.551 "method": "bdev_malloc_create", 00:18:03.551 "params": { 00:18:03.551 "name": "malloc0", 00:18:03.551 "num_blocks": 8192, 00:18:03.551 "block_size": 4096, 00:18:03.551 "physical_block_size": 4096, 00:18:03.551 "uuid": "b86916e9-b408-45d9-91d2-6e8762d7ff2b", 00:18:03.551 "optimal_io_boundary": 0 00:18:03.551 } 00:18:03.551 }, 00:18:03.551 { 00:18:03.551 "method": "bdev_wait_for_examine" 00:18:03.551 } 00:18:03.551 ] 00:18:03.551 }, 00:18:03.551 { 00:18:03.551 "subsystem": "nbd", 00:18:03.551 "config": [] 00:18:03.551 }, 00:18:03.551 { 00:18:03.551 "subsystem": "scheduler", 00:18:03.551 "config": [ 00:18:03.551 { 00:18:03.551 "method": "framework_set_scheduler", 00:18:03.551 "params": { 00:18:03.551 "name": "static" 00:18:03.551 } 00:18:03.551 } 00:18:03.551 ] 00:18:03.551 }, 00:18:03.551 { 00:18:03.551 "subsystem": "nvmf", 00:18:03.551 "config": [ 00:18:03.551 { 00:18:03.551 "method": "nvmf_set_config", 00:18:03.551 "params": { 00:18:03.551 "discovery_filter": "match_any", 00:18:03.551 "admin_cmd_passthru": { 00:18:03.551 "identify_ctrlr": false 00:18:03.551 } 00:18:03.551 } 00:18:03.551 }, 00:18:03.551 { 00:18:03.551 "method": "nvmf_set_max_subsystems", 00:18:03.551 "params": { 00:18:03.551 "max_subsystems": 1024 00:18:03.551 } 00:18:03.551 }, 00:18:03.551 { 00:18:03.551 "method": "nvmf_set_crdt", 00:18:03.551 "params": { 00:18:03.551 "crdt1": 0, 00:18:03.551 "crdt2": 0, 00:18:03.551 "crdt3": 0 00:18:03.551 } 00:18:03.551 }, 00:18:03.551 { 00:18:03.551 "method": "nvmf_create_transport", 00:18:03.551 "params": { 00:18:03.551 "trtype": "TCP", 00:18:03.551 "max_queue_depth": 128, 00:18:03.551 "max_io_qpairs_per_ctrlr": 127, 00:18:03.551 "in_capsule_data_size": 4096, 00:18:03.551 "max_io_size": 131072, 00:18:03.551 "io_unit_size": 131072, 00:18:03.551 "max_aq_depth": 128, 00:18:03.551 "num_shared_buffers": 511, 00:18:03.551 "buf_cache_size": 4294967295, 00:18:03.551 "dif_insert_or_strip": false, 00:18:03.551 "zcopy": false, 00:18:03.551 "c2h_success": false, 00:18:03.551 "sock_priority": 0, 00:18:03.551 "abort_timeout_sec": 1, 00:18:03.551 "ack_timeout": 0, 00:18:03.551 "data_wr_pool_size": 0 00:18:03.551 } 00:18:03.551 }, 00:18:03.551 { 00:18:03.551 "method": "nvmf_create_subsystem", 00:18:03.551 "params": { 00:18:03.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.551 "allow_any_host": false, 00:18:03.551 "serial_number": "SPDK00000000000001", 00:18:03.551 "model_number": "SPDK bdev Controller", 00:18:03.551 "max_namespaces": 10, 00:18:03.551 "min_cntlid": 1, 00:18:03.551 "max_cntlid": 65519, 00:18:03.551 "ana_reporting": false 00:18:03.551 } 00:18:03.551 }, 00:18:03.551 { 00:18:03.551 "method": "nvmf_subsystem_add_host", 00:18:03.551 "params": { 00:18:03.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.551 "host": "nqn.2016-06.io.spdk:host1", 00:18:03.551 "psk": "/tmp/tmp.rGnbouWqRR" 00:18:03.551 } 00:18:03.551 }, 00:18:03.551 { 00:18:03.551 "method": "nvmf_subsystem_add_ns", 00:18:03.551 "params": { 00:18:03.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.551 "namespace": { 00:18:03.551 "nsid": 1, 00:18:03.551 "bdev_name": "malloc0", 00:18:03.551 "nguid": "B86916E9B40845D991D26E8762D7FF2B", 00:18:03.551 "uuid": "b86916e9-b408-45d9-91d2-6e8762d7ff2b", 00:18:03.551 "no_auto_visible": false 00:18:03.551 } 00:18:03.551 } 00:18:03.551 }, 00:18:03.551 { 00:18:03.551 "method": "nvmf_subsystem_add_listener", 00:18:03.551 "params": { 00:18:03.551 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:18:03.551 "listen_address": { 00:18:03.551 "trtype": "TCP", 00:18:03.551 "adrfam": "IPv4", 00:18:03.551 "traddr": "10.0.0.2", 00:18:03.551 "trsvcid": "4420" 00:18:03.551 }, 00:18:03.551 "secure_channel": true 00:18:03.551 } 00:18:03.551 } 00:18:03.551 ] 00:18:03.551 } 00:18:03.551 ] 00:18:03.551 }' 00:18:03.551 20:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:03.551 20:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.551 20:16:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=239707 00:18:03.551 20:16:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:03.551 20:16:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 239707 00:18:03.551 20:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 239707 ']' 00:18:03.551 20:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.551 20:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:03.551 20:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.551 20:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:03.551 20:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.551 [2024-05-16 20:16:50.544462] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:18:03.551 [2024-05-16 20:16:50.544550] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.551 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.551 [2024-05-16 20:16:50.614180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.809 [2024-05-16 20:16:50.729246] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.809 [2024-05-16 20:16:50.729308] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.809 [2024-05-16 20:16:50.729323] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.809 [2024-05-16 20:16:50.729336] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.809 [2024-05-16 20:16:50.729355] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:03.809 [2024-05-16 20:16:50.729442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.067 [2024-05-16 20:16:50.967376] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.067 [2024-05-16 20:16:50.983279] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:04.067 [2024-05-16 20:16:50.999291] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:04.067 [2024-05-16 20:16:50.999360] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:04.067 [2024-05-16 20:16:51.018052] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.632 20:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:04.632 20:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:04.632 20:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:04.632 20:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:04.632 20:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.632 20:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.632 20:16:51 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=239854 00:18:04.632 20:16:51 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 239854 /var/tmp/bdevperf.sock 00:18:04.632 20:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 239854 ']' 00:18:04.633 20:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.633 20:16:51 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:04.633 20:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:04.633 20:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:04.633 20:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:04.633 20:16:51 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:04.633 "subsystems": [ 00:18:04.633 { 00:18:04.633 "subsystem": "keyring", 00:18:04.633 "config": [] 00:18:04.633 }, 00:18:04.633 { 00:18:04.633 "subsystem": "iobuf", 00:18:04.633 "config": [ 00:18:04.633 { 00:18:04.633 "method": "iobuf_set_options", 00:18:04.633 "params": { 00:18:04.633 "small_pool_count": 8192, 00:18:04.633 "large_pool_count": 1024, 00:18:04.633 "small_bufsize": 8192, 00:18:04.633 "large_bufsize": 135168 00:18:04.633 } 00:18:04.633 } 00:18:04.633 ] 00:18:04.633 }, 00:18:04.633 { 00:18:04.633 "subsystem": "sock", 00:18:04.633 "config": [ 00:18:04.633 { 00:18:04.633 "method": "sock_set_default_impl", 00:18:04.633 "params": { 00:18:04.633 "impl_name": "posix" 00:18:04.633 } 00:18:04.633 }, 00:18:04.633 { 00:18:04.633 "method": "sock_impl_set_options", 00:18:04.633 "params": { 00:18:04.633 "impl_name": "ssl", 00:18:04.633 "recv_buf_size": 4096, 00:18:04.633 "send_buf_size": 4096, 00:18:04.633 "enable_recv_pipe": true, 00:18:04.633 "enable_quickack": false, 00:18:04.633 "enable_placement_id": 0, 00:18:04.633 "enable_zerocopy_send_server": true, 00:18:04.633 "enable_zerocopy_send_client": false, 00:18:04.633 "zerocopy_threshold": 0, 00:18:04.633 "tls_version": 0, 00:18:04.633 "enable_ktls": false 00:18:04.633 } 00:18:04.633 }, 00:18:04.633 { 00:18:04.633 "method": "sock_impl_set_options", 00:18:04.633 "params": { 00:18:04.633 "impl_name": "posix", 00:18:04.633 "recv_buf_size": 2097152, 00:18:04.633 "send_buf_size": 2097152, 00:18:04.633 "enable_recv_pipe": true, 00:18:04.633 "enable_quickack": false, 00:18:04.633 "enable_placement_id": 0, 00:18:04.633 "enable_zerocopy_send_server": true, 00:18:04.633 "enable_zerocopy_send_client": false, 00:18:04.633 "zerocopy_threshold": 0, 00:18:04.633 "tls_version": 0, 00:18:04.633 "enable_ktls": false 00:18:04.633 } 00:18:04.633 } 00:18:04.633 ] 00:18:04.633 }, 00:18:04.633 { 00:18:04.633 "subsystem": "vmd", 00:18:04.633 "config": [] 00:18:04.633 }, 00:18:04.633 { 00:18:04.633 "subsystem": "accel", 00:18:04.633 "config": [ 00:18:04.633 { 00:18:04.633 "method": "accel_set_options", 00:18:04.633 "params": { 00:18:04.633 "small_cache_size": 128, 00:18:04.633 "large_cache_size": 16, 00:18:04.633 "task_count": 2048, 00:18:04.633 "sequence_count": 2048, 00:18:04.633 "buf_count": 2048 00:18:04.633 } 00:18:04.633 } 00:18:04.633 ] 00:18:04.633 }, 00:18:04.633 { 00:18:04.633 "subsystem": "bdev", 00:18:04.633 "config": [ 00:18:04.633 { 00:18:04.633 "method": "bdev_set_options", 00:18:04.633 "params": { 00:18:04.633 "bdev_io_pool_size": 65535, 00:18:04.633 "bdev_io_cache_size": 256, 00:18:04.633 "bdev_auto_examine": true, 00:18:04.633 "iobuf_small_cache_size": 128, 00:18:04.633 "iobuf_large_cache_size": 16 00:18:04.633 } 00:18:04.633 }, 00:18:04.633 { 00:18:04.633 "method": "bdev_raid_set_options", 00:18:04.633 "params": { 00:18:04.633 "process_window_size_kb": 1024 00:18:04.633 } 00:18:04.633 }, 00:18:04.633 { 00:18:04.633 "method": "bdev_iscsi_set_options", 00:18:04.633 "params": { 00:18:04.633 "timeout_sec": 30 00:18:04.633 } 00:18:04.633 }, 00:18:04.633 { 00:18:04.633 "method": "bdev_nvme_set_options", 00:18:04.633 "params": { 00:18:04.633 "action_on_timeout": "none", 00:18:04.633 "timeout_us": 0, 00:18:04.633 "timeout_admin_us": 0, 00:18:04.633 "keep_alive_timeout_ms": 10000, 00:18:04.633 "arbitration_burst": 0, 00:18:04.633 "low_priority_weight": 0, 00:18:04.633 
"medium_priority_weight": 0, 00:18:04.633 "high_priority_weight": 0, 00:18:04.633 "nvme_adminq_poll_period_us": 10000, 00:18:04.633 "nvme_ioq_poll_period_us": 0, 00:18:04.633 "io_queue_requests": 512, 00:18:04.633 "delay_cmd_submit": true, 00:18:04.633 "transport_retry_count": 4, 00:18:04.633 "bdev_retry_count": 3, 00:18:04.633 "transport_ack_timeout": 0, 00:18:04.633 "ctrlr_loss_timeout_sec": 0, 00:18:04.633 "reconnect_delay_sec": 0, 00:18:04.633 "fast_io_fail_timeout_sec": 0, 00:18:04.633 "disable_auto_failback": false, 00:18:04.633 "generate_uuids": false, 00:18:04.633 "transport_tos": 0, 00:18:04.633 "nvme_error_stat": false, 00:18:04.633 "rdma_srq_size": 0, 00:18:04.633 "io_path_stat": false, 00:18:04.633 "allow_accel_sequence": false, 00:18:04.633 "rdma_max_cq_size": 0, 00:18:04.633 "rdma_cm_event_timeout_ms": 0, 00:18:04.633 "dhchap_digests": [ 00:18:04.633 "sha256", 00:18:04.633 "sha384", 00:18:04.633 "sha512" 00:18:04.633 ], 00:18:04.633 "dhchap_dhgroups": [ 00:18:04.633 "null", 00:18:04.633 "ffdhe2048", 00:18:04.633 "ffdhe3072", 00:18:04.633 "ffdhe4096", 00:18:04.633 "ffdhe6144", 00:18:04.633 "ffdhe8192" 00:18:04.633 ] 00:18:04.633 } 00:18:04.633 }, 00:18:04.633 { 00:18:04.633 "method": "bdev_nvme_attach_controller", 00:18:04.633 "params": { 00:18:04.633 "name": "TLSTEST", 00:18:04.633 "trtype": "TCP", 00:18:04.633 "adrfam": "IPv4", 00:18:04.633 "traddr": "10.0.0.2", 00:18:04.633 "trsvcid": "4420", 00:18:04.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.633 "prchk_reftag": false, 00:18:04.633 "prchk_guard": false, 00:18:04.633 "ctrlr_loss_timeout_sec": 0, 00:18:04.633 "reconnect_delay_sec": 0, 00:18:04.633 "fast_io_fail_timeout_sec": 0, 00:18:04.633 "psk": "/tmp/tmp.rGnbouWqRR", 00:18:04.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.633 "hdgst": false, 00:18:04.633 "ddgst": false 00:18:04.633 } 00:18:04.633 }, 00:18:04.633 { 00:18:04.633 "method": "bdev_nvme_set_hotplug", 00:18:04.633 "params": { 00:18:04.633 "period_us": 100000, 00:18:04.633 "enable": false 00:18:04.633 } 00:18:04.633 }, 00:18:04.633 { 00:18:04.633 "method": "bdev_wait_for_examine" 00:18:04.633 } 00:18:04.633 ] 00:18:04.633 }, 00:18:04.633 { 00:18:04.633 "subsystem": "nbd", 00:18:04.633 "config": [] 00:18:04.633 } 00:18:04.633 ] 00:18:04.633 }' 00:18:04.633 20:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.633 [2024-05-16 20:16:51.529015] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:18:04.633 [2024-05-16 20:16:51.529089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid239854 ] 00:18:04.633 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.633 [2024-05-16 20:16:51.592399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.633 [2024-05-16 20:16:51.700496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.891 [2024-05-16 20:16:51.869752] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:04.891 [2024-05-16 20:16:51.869927] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:05.457 20:16:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:05.457 20:16:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:05.457 20:16:52 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:05.714 Running I/O for 10 seconds... 00:18:15.679 00:18:15.679 Latency(us) 00:18:15.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.679 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:15.679 Verification LBA range: start 0x0 length 0x2000 00:18:15.679 TLSTESTn1 : 10.04 3055.71 11.94 0.00 0.00 41777.06 6165.24 41360.50 00:18:15.679 =================================================================================================================== 00:18:15.679 Total : 3055.71 11.94 0.00 0.00 41777.06 6165.24 41360.50 00:18:15.679 0 00:18:15.679 20:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:15.679 20:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 239854 00:18:15.679 20:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 239854 ']' 00:18:15.679 20:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 239854 00:18:15.679 20:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:15.679 20:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:15.679 20:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 239854 00:18:15.679 20:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:15.679 20:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:15.679 20:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 239854' 00:18:15.679 killing process with pid 239854 00:18:15.679 20:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 239854 00:18:15.679 Received shutdown signal, test time was about 10.000000 seconds 00:18:15.679 00:18:15.679 Latency(us) 00:18:15.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.679 =================================================================================================================== 00:18:15.679 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:15.679 [2024-05-16 20:17:02.765529] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in 
v24.09 hit 1 times 00:18:15.679 20:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 239854 00:18:15.936 20:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 239707 00:18:15.936 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 239707 ']' 00:18:15.936 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 239707 00:18:15.936 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:15.936 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:15.936 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 239707 00:18:15.936 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:15.936 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:15.936 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 239707' 00:18:15.936 killing process with pid 239707 00:18:15.936 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 239707 00:18:15.936 [2024-05-16 20:17:03.061341] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:15.936 [2024-05-16 20:17:03.061407] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:15.936 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 239707 00:18:16.193 20:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:16.193 20:17:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:16.193 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:16.193 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.451 20:17:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=241192 00:18:16.451 20:17:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:16.451 20:17:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 241192 00:18:16.451 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 241192 ']' 00:18:16.451 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.451 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:16.451 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.451 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:16.451 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.451 [2024-05-16 20:17:03.386485] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:18:16.451 [2024-05-16 20:17:03.386561] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.451 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.451 [2024-05-16 20:17:03.448557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.451 [2024-05-16 20:17:03.556010] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.451 [2024-05-16 20:17:03.556065] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.451 [2024-05-16 20:17:03.556093] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.451 [2024-05-16 20:17:03.556105] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.451 [2024-05-16 20:17:03.556115] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.451 [2024-05-16 20:17:03.556157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.709 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:16.709 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:16.709 20:17:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:16.709 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:16.709 20:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.709 20:17:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.709 20:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.rGnbouWqRR 00:18:16.709 20:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rGnbouWqRR 00:18:16.709 20:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:16.966 [2024-05-16 20:17:03.975198] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.966 20:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:17.223 20:17:04 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:17.481 [2024-05-16 20:17:04.524672] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:17.481 [2024-05-16 20:17:04.524762] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:17.481 [2024-05-16 20:17:04.524971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.481 20:17:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:17.739 malloc0 00:18:17.739 20:17:04 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:18:18.004 20:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rGnbouWqRR 00:18:18.261 [2024-05-16 20:17:05.378508] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:18.261 20:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=241472 00:18:18.261 20:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:18.261 20:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:18.261 20:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 241472 /var/tmp/bdevperf.sock 00:18:18.261 20:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 241472 ']' 00:18:18.261 20:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.261 20:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:18.261 20:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.261 20:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:18.261 20:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.519 [2024-05-16 20:17:05.442862] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:18:18.519 [2024-05-16 20:17:05.442933] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid241472 ] 00:18:18.519 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.519 [2024-05-16 20:17:05.506341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.519 [2024-05-16 20:17:05.623830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.777 20:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:18.777 20:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:18.777 20:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rGnbouWqRR 00:18:19.034 20:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:19.291 [2024-05-16 20:17:06.256074] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.291 nvme0n1 00:18:19.291 20:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:19.549 Running I/O for 1 seconds... 
00:18:20.483 00:18:20.483 Latency(us) 00:18:20.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.483 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:20.483 Verification LBA range: start 0x0 length 0x2000 00:18:20.483 nvme0n1 : 1.03 3144.27 12.28 0.00 0.00 40139.63 8107.05 48739.37 00:18:20.483 =================================================================================================================== 00:18:20.483 Total : 3144.27 12.28 0.00 0.00 40139.63 8107.05 48739.37 00:18:20.483 0 00:18:20.483 20:17:07 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 241472 00:18:20.483 20:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 241472 ']' 00:18:20.483 20:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 241472 00:18:20.483 20:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:20.483 20:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:20.483 20:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 241472 00:18:20.483 20:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:20.483 20:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:20.483 20:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 241472' 00:18:20.483 killing process with pid 241472 00:18:20.483 20:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 241472 00:18:20.483 Received shutdown signal, test time was about 1.000000 seconds 00:18:20.483 00:18:20.483 Latency(us) 00:18:20.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.483 =================================================================================================================== 00:18:20.483 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:20.483 20:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 241472 00:18:20.741 20:17:07 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 241192 00:18:20.741 20:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 241192 ']' 00:18:20.741 20:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 241192 00:18:20.741 20:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:20.741 20:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:20.741 20:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 241192 00:18:20.741 20:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:20.741 20:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:20.741 20:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 241192' 00:18:20.741 killing process with pid 241192 00:18:20.741 20:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 241192 00:18:20.741 [2024-05-16 20:17:07.822769] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:20.741 20:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 241192 00:18:20.741 [2024-05-16 20:17:07.822828] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal 
in v24.09 hit 1 times 00:18:20.999 20:17:08 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:18:20.999 20:17:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:20.999 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:20.999 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.999 20:17:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=241871 00:18:20.999 20:17:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:20.999 20:17:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 241871 00:18:20.999 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 241871 ']' 00:18:20.999 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.999 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:20.999 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.999 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:20.999 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.999 [2024-05-16 20:17:08.141230] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:18:20.999 [2024-05-16 20:17:08.141303] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.257 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.257 [2024-05-16 20:17:08.206793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.257 [2024-05-16 20:17:08.322628] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.258 [2024-05-16 20:17:08.322691] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.258 [2024-05-16 20:17:08.322707] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.258 [2024-05-16 20:17:08.322721] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.258 [2024-05-16 20:17:08.322733] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
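For reference, the setup_nvmf_tgt helper invoked at target/tls.sh@219 (the @239 case repeats essentially the same steps through rpc_cmd) configures the target through the rpc.py invocations captured in the xtrace above. A condensed sketch of that sequence follows; here rpc.py stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path shown in the log, and the 10.0.0.2:4420 listener plus the /tmp/tmp.rGnbouWqRR PSK file are specific to this run:

# TLS-enabled target setup as driven by setup_nvmf_tgt (sketch of the rpc.py calls logged above)
rpc.py nvmf_create_transport -t tcp -o                      # TCP transport; -o corresponds to "c2h_success": false in the saved config
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k requests TLS (experimental)
rpc.py bdev_malloc_create 32 4096 -b malloc0                # 32 MiB malloc bdev backing the namespace
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rGnbouWqRR

Passing the PSK as a file path on nvmf_subsystem_add_host is what triggers the 'PSK path' deprecation warnings seen in the log.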
00:18:21.258 [2024-05-16 20:17:08.322765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.516 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:21.516 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:21.516 20:17:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:21.516 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:21.516 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.516 20:17:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.516 20:17:08 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:18:21.516 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.516 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.516 [2024-05-16 20:17:08.463793] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.516 malloc0 00:18:21.516 [2024-05-16 20:17:08.494906] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:21.516 [2024-05-16 20:17:08.494984] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:21.516 [2024-05-16 20:17:08.495178] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.516 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.516 20:17:08 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=241898 00:18:21.516 20:17:08 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:21.516 20:17:08 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 241898 /var/tmp/bdevperf.sock 00:18:21.516 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 241898 ']' 00:18:21.516 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:21.516 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:21.516 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:21.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:21.516 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:21.516 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.516 [2024-05-16 20:17:08.569339] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:18:21.516 [2024-05-16 20:17:08.569417] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid241898 ] 00:18:21.516 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.516 [2024-05-16 20:17:08.634622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.774 [2024-05-16 20:17:08.748116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.774 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:21.774 20:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:21.774 20:17:08 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rGnbouWqRR 00:18:22.031 20:17:09 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:22.289 [2024-05-16 20:17:09.340360] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:22.289 nvme0n1 00:18:22.289 20:17:09 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:22.547 Running I/O for 1 seconds... 00:18:23.479 00:18:23.479 Latency(us) 00:18:23.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.479 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:23.479 Verification LBA range: start 0x0 length 0x2000 00:18:23.479 nvme0n1 : 1.02 3090.07 12.07 0.00 0.00 41042.42 6505.05 54370.61 00:18:23.479 =================================================================================================================== 00:18:23.479 Total : 3090.07 12.07 0.00 0.00 41042.42 6505.05 54370.61 00:18:23.479 0 00:18:23.479 20:17:10 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:23.479 20:17:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.479 20:17:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.738 20:17:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.738 20:17:10 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:18:23.738 "subsystems": [ 00:18:23.738 { 00:18:23.738 "subsystem": "keyring", 00:18:23.738 "config": [ 00:18:23.738 { 00:18:23.738 "method": "keyring_file_add_key", 00:18:23.738 "params": { 00:18:23.738 "name": "key0", 00:18:23.738 "path": "/tmp/tmp.rGnbouWqRR" 00:18:23.738 } 00:18:23.738 } 00:18:23.738 ] 00:18:23.738 }, 00:18:23.738 { 00:18:23.738 "subsystem": "iobuf", 00:18:23.738 "config": [ 00:18:23.738 { 00:18:23.738 "method": "iobuf_set_options", 00:18:23.738 "params": { 00:18:23.738 "small_pool_count": 8192, 00:18:23.738 "large_pool_count": 1024, 00:18:23.738 "small_bufsize": 8192, 00:18:23.738 "large_bufsize": 135168 00:18:23.738 } 00:18:23.738 } 00:18:23.738 ] 00:18:23.738 }, 00:18:23.738 { 00:18:23.738 "subsystem": "sock", 00:18:23.738 "config": [ 00:18:23.738 { 00:18:23.738 "method": "sock_set_default_impl", 00:18:23.738 "params": { 00:18:23.738 "impl_name": "posix" 00:18:23.738 } 00:18:23.738 }, 00:18:23.738 
{ 00:18:23.738 "method": "sock_impl_set_options", 00:18:23.738 "params": { 00:18:23.738 "impl_name": "ssl", 00:18:23.738 "recv_buf_size": 4096, 00:18:23.738 "send_buf_size": 4096, 00:18:23.738 "enable_recv_pipe": true, 00:18:23.738 "enable_quickack": false, 00:18:23.738 "enable_placement_id": 0, 00:18:23.738 "enable_zerocopy_send_server": true, 00:18:23.738 "enable_zerocopy_send_client": false, 00:18:23.738 "zerocopy_threshold": 0, 00:18:23.738 "tls_version": 0, 00:18:23.738 "enable_ktls": false 00:18:23.738 } 00:18:23.738 }, 00:18:23.738 { 00:18:23.738 "method": "sock_impl_set_options", 00:18:23.738 "params": { 00:18:23.738 "impl_name": "posix", 00:18:23.738 "recv_buf_size": 2097152, 00:18:23.738 "send_buf_size": 2097152, 00:18:23.738 "enable_recv_pipe": true, 00:18:23.738 "enable_quickack": false, 00:18:23.738 "enable_placement_id": 0, 00:18:23.738 "enable_zerocopy_send_server": true, 00:18:23.738 "enable_zerocopy_send_client": false, 00:18:23.738 "zerocopy_threshold": 0, 00:18:23.738 "tls_version": 0, 00:18:23.738 "enable_ktls": false 00:18:23.738 } 00:18:23.738 } 00:18:23.738 ] 00:18:23.738 }, 00:18:23.738 { 00:18:23.738 "subsystem": "vmd", 00:18:23.738 "config": [] 00:18:23.738 }, 00:18:23.738 { 00:18:23.738 "subsystem": "accel", 00:18:23.738 "config": [ 00:18:23.738 { 00:18:23.738 "method": "accel_set_options", 00:18:23.738 "params": { 00:18:23.738 "small_cache_size": 128, 00:18:23.738 "large_cache_size": 16, 00:18:23.738 "task_count": 2048, 00:18:23.738 "sequence_count": 2048, 00:18:23.738 "buf_count": 2048 00:18:23.738 } 00:18:23.738 } 00:18:23.738 ] 00:18:23.738 }, 00:18:23.738 { 00:18:23.738 "subsystem": "bdev", 00:18:23.738 "config": [ 00:18:23.738 { 00:18:23.738 "method": "bdev_set_options", 00:18:23.738 "params": { 00:18:23.738 "bdev_io_pool_size": 65535, 00:18:23.738 "bdev_io_cache_size": 256, 00:18:23.738 "bdev_auto_examine": true, 00:18:23.738 "iobuf_small_cache_size": 128, 00:18:23.738 "iobuf_large_cache_size": 16 00:18:23.738 } 00:18:23.738 }, 00:18:23.738 { 00:18:23.738 "method": "bdev_raid_set_options", 00:18:23.738 "params": { 00:18:23.738 "process_window_size_kb": 1024 00:18:23.738 } 00:18:23.738 }, 00:18:23.738 { 00:18:23.738 "method": "bdev_iscsi_set_options", 00:18:23.738 "params": { 00:18:23.738 "timeout_sec": 30 00:18:23.738 } 00:18:23.738 }, 00:18:23.738 { 00:18:23.738 "method": "bdev_nvme_set_options", 00:18:23.738 "params": { 00:18:23.738 "action_on_timeout": "none", 00:18:23.738 "timeout_us": 0, 00:18:23.738 "timeout_admin_us": 0, 00:18:23.738 "keep_alive_timeout_ms": 10000, 00:18:23.738 "arbitration_burst": 0, 00:18:23.738 "low_priority_weight": 0, 00:18:23.738 "medium_priority_weight": 0, 00:18:23.738 "high_priority_weight": 0, 00:18:23.738 "nvme_adminq_poll_period_us": 10000, 00:18:23.738 "nvme_ioq_poll_period_us": 0, 00:18:23.738 "io_queue_requests": 0, 00:18:23.738 "delay_cmd_submit": true, 00:18:23.738 "transport_retry_count": 4, 00:18:23.738 "bdev_retry_count": 3, 00:18:23.738 "transport_ack_timeout": 0, 00:18:23.738 "ctrlr_loss_timeout_sec": 0, 00:18:23.738 "reconnect_delay_sec": 0, 00:18:23.738 "fast_io_fail_timeout_sec": 0, 00:18:23.738 "disable_auto_failback": false, 00:18:23.738 "generate_uuids": false, 00:18:23.738 "transport_tos": 0, 00:18:23.738 "nvme_error_stat": false, 00:18:23.738 "rdma_srq_size": 0, 00:18:23.738 "io_path_stat": false, 00:18:23.738 "allow_accel_sequence": false, 00:18:23.738 "rdma_max_cq_size": 0, 00:18:23.738 "rdma_cm_event_timeout_ms": 0, 00:18:23.738 "dhchap_digests": [ 00:18:23.738 "sha256", 00:18:23.738 "sha384", 
00:18:23.738 "sha512" 00:18:23.738 ], 00:18:23.738 "dhchap_dhgroups": [ 00:18:23.738 "null", 00:18:23.738 "ffdhe2048", 00:18:23.738 "ffdhe3072", 00:18:23.738 "ffdhe4096", 00:18:23.738 "ffdhe6144", 00:18:23.738 "ffdhe8192" 00:18:23.738 ] 00:18:23.738 } 00:18:23.738 }, 00:18:23.738 { 00:18:23.738 "method": "bdev_nvme_set_hotplug", 00:18:23.738 "params": { 00:18:23.738 "period_us": 100000, 00:18:23.738 "enable": false 00:18:23.738 } 00:18:23.738 }, 00:18:23.738 { 00:18:23.738 "method": "bdev_malloc_create", 00:18:23.738 "params": { 00:18:23.738 "name": "malloc0", 00:18:23.738 "num_blocks": 8192, 00:18:23.738 "block_size": 4096, 00:18:23.738 "physical_block_size": 4096, 00:18:23.738 "uuid": "b19b50ec-edda-4b01-abd6-c66c9a7d099d", 00:18:23.738 "optimal_io_boundary": 0 00:18:23.738 } 00:18:23.738 }, 00:18:23.738 { 00:18:23.738 "method": "bdev_wait_for_examine" 00:18:23.738 } 00:18:23.738 ] 00:18:23.738 }, 00:18:23.738 { 00:18:23.738 "subsystem": "nbd", 00:18:23.738 "config": [] 00:18:23.738 }, 00:18:23.738 { 00:18:23.738 "subsystem": "scheduler", 00:18:23.738 "config": [ 00:18:23.738 { 00:18:23.738 "method": "framework_set_scheduler", 00:18:23.738 "params": { 00:18:23.738 "name": "static" 00:18:23.738 } 00:18:23.738 } 00:18:23.738 ] 00:18:23.738 }, 00:18:23.738 { 00:18:23.738 "subsystem": "nvmf", 00:18:23.738 "config": [ 00:18:23.738 { 00:18:23.738 "method": "nvmf_set_config", 00:18:23.738 "params": { 00:18:23.738 "discovery_filter": "match_any", 00:18:23.738 "admin_cmd_passthru": { 00:18:23.738 "identify_ctrlr": false 00:18:23.738 } 00:18:23.738 } 00:18:23.738 }, 00:18:23.738 { 00:18:23.738 "method": "nvmf_set_max_subsystems", 00:18:23.738 "params": { 00:18:23.738 "max_subsystems": 1024 00:18:23.738 } 00:18:23.738 }, 00:18:23.738 { 00:18:23.738 "method": "nvmf_set_crdt", 00:18:23.738 "params": { 00:18:23.738 "crdt1": 0, 00:18:23.738 "crdt2": 0, 00:18:23.738 "crdt3": 0 00:18:23.739 } 00:18:23.739 }, 00:18:23.739 { 00:18:23.739 "method": "nvmf_create_transport", 00:18:23.739 "params": { 00:18:23.739 "trtype": "TCP", 00:18:23.739 "max_queue_depth": 128, 00:18:23.739 "max_io_qpairs_per_ctrlr": 127, 00:18:23.739 "in_capsule_data_size": 4096, 00:18:23.739 "max_io_size": 131072, 00:18:23.739 "io_unit_size": 131072, 00:18:23.739 "max_aq_depth": 128, 00:18:23.739 "num_shared_buffers": 511, 00:18:23.739 "buf_cache_size": 4294967295, 00:18:23.739 "dif_insert_or_strip": false, 00:18:23.739 "zcopy": false, 00:18:23.739 "c2h_success": false, 00:18:23.739 "sock_priority": 0, 00:18:23.739 "abort_timeout_sec": 1, 00:18:23.739 "ack_timeout": 0, 00:18:23.739 "data_wr_pool_size": 0 00:18:23.739 } 00:18:23.739 }, 00:18:23.739 { 00:18:23.739 "method": "nvmf_create_subsystem", 00:18:23.739 "params": { 00:18:23.739 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.739 "allow_any_host": false, 00:18:23.739 "serial_number": "00000000000000000000", 00:18:23.739 "model_number": "SPDK bdev Controller", 00:18:23.739 "max_namespaces": 32, 00:18:23.739 "min_cntlid": 1, 00:18:23.739 "max_cntlid": 65519, 00:18:23.739 "ana_reporting": false 00:18:23.739 } 00:18:23.739 }, 00:18:23.739 { 00:18:23.739 "method": "nvmf_subsystem_add_host", 00:18:23.739 "params": { 00:18:23.739 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.739 "host": "nqn.2016-06.io.spdk:host1", 00:18:23.739 "psk": "key0" 00:18:23.739 } 00:18:23.739 }, 00:18:23.739 { 00:18:23.739 "method": "nvmf_subsystem_add_ns", 00:18:23.739 "params": { 00:18:23.739 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.739 "namespace": { 00:18:23.739 "nsid": 1, 00:18:23.739 "bdev_name": 
"malloc0", 00:18:23.739 "nguid": "B19B50ECEDDA4B01ABD6C66C9A7D099D", 00:18:23.739 "uuid": "b19b50ec-edda-4b01-abd6-c66c9a7d099d", 00:18:23.739 "no_auto_visible": false 00:18:23.739 } 00:18:23.739 } 00:18:23.739 }, 00:18:23.739 { 00:18:23.739 "method": "nvmf_subsystem_add_listener", 00:18:23.739 "params": { 00:18:23.739 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.739 "listen_address": { 00:18:23.739 "trtype": "TCP", 00:18:23.739 "adrfam": "IPv4", 00:18:23.739 "traddr": "10.0.0.2", 00:18:23.739 "trsvcid": "4420" 00:18:23.739 }, 00:18:23.739 "secure_channel": true 00:18:23.739 } 00:18:23.739 } 00:18:23.739 ] 00:18:23.739 } 00:18:23.739 ] 00:18:23.739 }' 00:18:23.739 20:17:10 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:23.997 20:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:18:23.997 "subsystems": [ 00:18:23.997 { 00:18:23.997 "subsystem": "keyring", 00:18:23.997 "config": [ 00:18:23.997 { 00:18:23.997 "method": "keyring_file_add_key", 00:18:23.997 "params": { 00:18:23.997 "name": "key0", 00:18:23.997 "path": "/tmp/tmp.rGnbouWqRR" 00:18:23.997 } 00:18:23.997 } 00:18:23.997 ] 00:18:23.997 }, 00:18:23.997 { 00:18:23.997 "subsystem": "iobuf", 00:18:23.997 "config": [ 00:18:23.997 { 00:18:23.997 "method": "iobuf_set_options", 00:18:23.997 "params": { 00:18:23.997 "small_pool_count": 8192, 00:18:23.997 "large_pool_count": 1024, 00:18:23.997 "small_bufsize": 8192, 00:18:23.997 "large_bufsize": 135168 00:18:23.997 } 00:18:23.997 } 00:18:23.997 ] 00:18:23.997 }, 00:18:23.997 { 00:18:23.997 "subsystem": "sock", 00:18:23.997 "config": [ 00:18:23.997 { 00:18:23.997 "method": "sock_set_default_impl", 00:18:23.997 "params": { 00:18:23.997 "impl_name": "posix" 00:18:23.997 } 00:18:23.997 }, 00:18:23.997 { 00:18:23.997 "method": "sock_impl_set_options", 00:18:23.997 "params": { 00:18:23.997 "impl_name": "ssl", 00:18:23.997 "recv_buf_size": 4096, 00:18:23.997 "send_buf_size": 4096, 00:18:23.997 "enable_recv_pipe": true, 00:18:23.997 "enable_quickack": false, 00:18:23.997 "enable_placement_id": 0, 00:18:23.997 "enable_zerocopy_send_server": true, 00:18:23.997 "enable_zerocopy_send_client": false, 00:18:23.997 "zerocopy_threshold": 0, 00:18:23.997 "tls_version": 0, 00:18:23.997 "enable_ktls": false 00:18:23.997 } 00:18:23.997 }, 00:18:23.997 { 00:18:23.997 "method": "sock_impl_set_options", 00:18:23.997 "params": { 00:18:23.997 "impl_name": "posix", 00:18:23.997 "recv_buf_size": 2097152, 00:18:23.997 "send_buf_size": 2097152, 00:18:23.997 "enable_recv_pipe": true, 00:18:23.997 "enable_quickack": false, 00:18:23.997 "enable_placement_id": 0, 00:18:23.997 "enable_zerocopy_send_server": true, 00:18:23.997 "enable_zerocopy_send_client": false, 00:18:23.997 "zerocopy_threshold": 0, 00:18:23.997 "tls_version": 0, 00:18:23.997 "enable_ktls": false 00:18:23.997 } 00:18:23.997 } 00:18:23.997 ] 00:18:23.997 }, 00:18:23.997 { 00:18:23.997 "subsystem": "vmd", 00:18:23.998 "config": [] 00:18:23.998 }, 00:18:23.998 { 00:18:23.998 "subsystem": "accel", 00:18:23.998 "config": [ 00:18:23.998 { 00:18:23.998 "method": "accel_set_options", 00:18:23.998 "params": { 00:18:23.998 "small_cache_size": 128, 00:18:23.998 "large_cache_size": 16, 00:18:23.998 "task_count": 2048, 00:18:23.998 "sequence_count": 2048, 00:18:23.998 "buf_count": 2048 00:18:23.998 } 00:18:23.998 } 00:18:23.998 ] 00:18:23.998 }, 00:18:23.998 { 00:18:23.998 "subsystem": "bdev", 00:18:23.998 "config": [ 00:18:23.998 { 00:18:23.998 
"method": "bdev_set_options", 00:18:23.998 "params": { 00:18:23.998 "bdev_io_pool_size": 65535, 00:18:23.998 "bdev_io_cache_size": 256, 00:18:23.998 "bdev_auto_examine": true, 00:18:23.998 "iobuf_small_cache_size": 128, 00:18:23.998 "iobuf_large_cache_size": 16 00:18:23.998 } 00:18:23.998 }, 00:18:23.998 { 00:18:23.998 "method": "bdev_raid_set_options", 00:18:23.998 "params": { 00:18:23.998 "process_window_size_kb": 1024 00:18:23.998 } 00:18:23.998 }, 00:18:23.998 { 00:18:23.998 "method": "bdev_iscsi_set_options", 00:18:23.998 "params": { 00:18:23.998 "timeout_sec": 30 00:18:23.998 } 00:18:23.998 }, 00:18:23.998 { 00:18:23.998 "method": "bdev_nvme_set_options", 00:18:23.998 "params": { 00:18:23.998 "action_on_timeout": "none", 00:18:23.998 "timeout_us": 0, 00:18:23.998 "timeout_admin_us": 0, 00:18:23.998 "keep_alive_timeout_ms": 10000, 00:18:23.998 "arbitration_burst": 0, 00:18:23.998 "low_priority_weight": 0, 00:18:23.998 "medium_priority_weight": 0, 00:18:23.998 "high_priority_weight": 0, 00:18:23.998 "nvme_adminq_poll_period_us": 10000, 00:18:23.998 "nvme_ioq_poll_period_us": 0, 00:18:23.998 "io_queue_requests": 512, 00:18:23.998 "delay_cmd_submit": true, 00:18:23.998 "transport_retry_count": 4, 00:18:23.998 "bdev_retry_count": 3, 00:18:23.998 "transport_ack_timeout": 0, 00:18:23.998 "ctrlr_loss_timeout_sec": 0, 00:18:23.998 "reconnect_delay_sec": 0, 00:18:23.998 "fast_io_fail_timeout_sec": 0, 00:18:23.998 "disable_auto_failback": false, 00:18:23.998 "generate_uuids": false, 00:18:23.998 "transport_tos": 0, 00:18:23.998 "nvme_error_stat": false, 00:18:23.998 "rdma_srq_size": 0, 00:18:23.998 "io_path_stat": false, 00:18:23.998 "allow_accel_sequence": false, 00:18:23.998 "rdma_max_cq_size": 0, 00:18:23.998 "rdma_cm_event_timeout_ms": 0, 00:18:23.998 "dhchap_digests": [ 00:18:23.998 "sha256", 00:18:23.998 "sha384", 00:18:23.998 "sha512" 00:18:23.998 ], 00:18:23.998 "dhchap_dhgroups": [ 00:18:23.998 "null", 00:18:23.998 "ffdhe2048", 00:18:23.998 "ffdhe3072", 00:18:23.998 "ffdhe4096", 00:18:23.998 "ffdhe6144", 00:18:23.998 "ffdhe8192" 00:18:23.998 ] 00:18:23.998 } 00:18:23.998 }, 00:18:23.998 { 00:18:23.998 "method": "bdev_nvme_attach_controller", 00:18:23.998 "params": { 00:18:23.998 "name": "nvme0", 00:18:23.998 "trtype": "TCP", 00:18:23.998 "adrfam": "IPv4", 00:18:23.998 "traddr": "10.0.0.2", 00:18:23.998 "trsvcid": "4420", 00:18:23.998 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.998 "prchk_reftag": false, 00:18:23.998 "prchk_guard": false, 00:18:23.998 "ctrlr_loss_timeout_sec": 0, 00:18:23.998 "reconnect_delay_sec": 0, 00:18:23.998 "fast_io_fail_timeout_sec": 0, 00:18:23.998 "psk": "key0", 00:18:23.998 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:23.998 "hdgst": false, 00:18:23.998 "ddgst": false 00:18:23.998 } 00:18:23.998 }, 00:18:23.998 { 00:18:23.998 "method": "bdev_nvme_set_hotplug", 00:18:23.998 "params": { 00:18:23.998 "period_us": 100000, 00:18:23.998 "enable": false 00:18:23.998 } 00:18:23.998 }, 00:18:23.998 { 00:18:23.998 "method": "bdev_enable_histogram", 00:18:23.998 "params": { 00:18:23.998 "name": "nvme0n1", 00:18:23.998 "enable": true 00:18:23.998 } 00:18:23.998 }, 00:18:23.998 { 00:18:23.998 "method": "bdev_wait_for_examine" 00:18:23.998 } 00:18:23.998 ] 00:18:23.998 }, 00:18:23.998 { 00:18:23.998 "subsystem": "nbd", 00:18:23.998 "config": [] 00:18:23.998 } 00:18:23.998 ] 00:18:23.998 }' 00:18:23.998 20:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 241898 00:18:23.998 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 241898 ']' 
00:18:23.998 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 241898 00:18:23.998 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:23.998 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:23.998 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 241898 00:18:23.998 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:23.998 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:23.998 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 241898' 00:18:23.998 killing process with pid 241898 00:18:23.998 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 241898 00:18:23.998 Received shutdown signal, test time was about 1.000000 seconds 00:18:23.998 00:18:23.998 Latency(us) 00:18:23.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.998 =================================================================================================================== 00:18:23.998 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:23.998 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 241898 00:18:24.256 20:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 241871 00:18:24.256 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 241871 ']' 00:18:24.256 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 241871 00:18:24.256 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:24.256 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:24.256 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 241871 00:18:24.256 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:24.256 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:24.256 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 241871' 00:18:24.256 killing process with pid 241871 00:18:24.256 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 241871 00:18:24.256 [2024-05-16 20:17:11.326033] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:24.256 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 241871 00:18:24.515 20:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:24.515 20:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:18:24.515 "subsystems": [ 00:18:24.515 { 00:18:24.515 "subsystem": "keyring", 00:18:24.515 "config": [ 00:18:24.515 { 00:18:24.515 "method": "keyring_file_add_key", 00:18:24.515 "params": { 00:18:24.515 "name": "key0", 00:18:24.515 "path": "/tmp/tmp.rGnbouWqRR" 00:18:24.515 } 00:18:24.515 } 00:18:24.515 ] 00:18:24.515 }, 00:18:24.515 { 00:18:24.515 "subsystem": "iobuf", 00:18:24.515 "config": [ 00:18:24.515 { 00:18:24.515 "method": "iobuf_set_options", 00:18:24.515 "params": { 00:18:24.515 "small_pool_count": 8192, 00:18:24.515 "large_pool_count": 1024, 00:18:24.515 "small_bufsize": 8192, 00:18:24.515 "large_bufsize": 135168 00:18:24.515 } 00:18:24.515 } 00:18:24.515 ] 00:18:24.515 }, 00:18:24.515 { 00:18:24.515 
"subsystem": "sock", 00:18:24.515 "config": [ 00:18:24.515 { 00:18:24.515 "method": "sock_set_default_impl", 00:18:24.515 "params": { 00:18:24.515 "impl_name": "posix" 00:18:24.515 } 00:18:24.515 }, 00:18:24.515 { 00:18:24.515 "method": "sock_impl_set_options", 00:18:24.515 "params": { 00:18:24.515 "impl_name": "ssl", 00:18:24.515 "recv_buf_size": 4096, 00:18:24.515 "send_buf_size": 4096, 00:18:24.515 "enable_recv_pipe": true, 00:18:24.515 "enable_quickack": false, 00:18:24.515 "enable_placement_id": 0, 00:18:24.515 "enable_zerocopy_send_server": true, 00:18:24.515 "enable_zerocopy_send_client": false, 00:18:24.515 "zerocopy_threshold": 0, 00:18:24.515 "tls_version": 0, 00:18:24.515 "enable_ktls": false 00:18:24.515 } 00:18:24.515 }, 00:18:24.515 { 00:18:24.515 "method": "sock_impl_set_options", 00:18:24.515 "params": { 00:18:24.515 "impl_name": "posix", 00:18:24.515 "recv_buf_size": 2097152, 00:18:24.515 "send_buf_size": 2097152, 00:18:24.515 "enable_recv_pipe": true, 00:18:24.515 "enable_quickack": false, 00:18:24.515 "enable_placement_id": 0, 00:18:24.515 "enable_zerocopy_send_server": true, 00:18:24.515 "enable_zerocopy_send_client": false, 00:18:24.515 "zerocopy_threshold": 0, 00:18:24.515 "tls_version": 0, 00:18:24.515 "enable_ktls": false 00:18:24.515 } 00:18:24.515 } 00:18:24.515 ] 00:18:24.515 }, 00:18:24.515 { 00:18:24.515 "subsystem": "vmd", 00:18:24.515 "config": [] 00:18:24.515 }, 00:18:24.515 { 00:18:24.515 "subsystem": "accel", 00:18:24.515 "config": [ 00:18:24.515 { 00:18:24.515 "method": "accel_set_options", 00:18:24.515 "params": { 00:18:24.515 "small_cache_size": 128, 00:18:24.515 "large_cache_size": 16, 00:18:24.515 "task_count": 2048, 00:18:24.515 "sequence_count": 2048, 00:18:24.515 "buf_count": 2048 00:18:24.515 } 00:18:24.515 } 00:18:24.515 ] 00:18:24.515 }, 00:18:24.515 { 00:18:24.515 "subsystem": "bdev", 00:18:24.515 "config": [ 00:18:24.515 { 00:18:24.515 "method": "bdev_set_options", 00:18:24.515 "params": { 00:18:24.515 "bdev_io_pool_size": 65535, 00:18:24.515 "bdev_io_cache_size": 256, 00:18:24.515 "bdev_auto_examine": true, 00:18:24.515 "iobuf_small_cache_size": 128, 00:18:24.515 "iobuf_large_cache_size": 16 00:18:24.515 } 00:18:24.515 }, 00:18:24.515 { 00:18:24.515 "method": "bdev_raid_set_options", 00:18:24.515 "params": { 00:18:24.515 "process_window_size_kb": 1024 00:18:24.515 } 00:18:24.515 }, 00:18:24.515 { 00:18:24.515 "method": "bdev_iscsi_set_options", 00:18:24.515 "params": { 00:18:24.515 "timeout_sec": 30 00:18:24.515 } 00:18:24.515 }, 00:18:24.515 { 00:18:24.515 "method": "bdev_nvme_set_options", 00:18:24.515 "params": { 00:18:24.515 "action_on_timeout": "none", 00:18:24.515 "timeout_us": 0, 00:18:24.515 "timeout_admin_us": 0, 00:18:24.515 "keep_alive_timeout_ms": 10000, 00:18:24.515 "arbitration_burst": 0, 00:18:24.515 "low_priority_weight": 0, 00:18:24.515 "medium_priority_weight": 0, 00:18:24.515 "high_priority_weight": 0, 00:18:24.515 "nvme_adminq_poll_period_us": 10000, 00:18:24.515 "nvme_ioq_poll_period_us": 0, 00:18:24.515 "io_queue_requests": 0, 00:18:24.515 "delay_cmd_submit": true, 00:18:24.515 "transport_retry_count": 4, 00:18:24.515 "bdev_retry_count": 3, 00:18:24.515 "transport_ack_timeout": 0, 00:18:24.515 "ctrlr_loss_timeout_sec": 0, 00:18:24.515 "reconnect_delay_sec": 0, 00:18:24.515 "fast_io_fail_timeout_sec": 0, 00:18:24.515 "disable_auto_failback": false, 00:18:24.515 "generate_uuids": false, 00:18:24.515 "transport_tos": 0, 00:18:24.515 "nvme_error_stat": false, 00:18:24.515 "rdma_srq_size": 0, 00:18:24.515 "io_path_stat": 
false, 00:18:24.515 "allow_accel_sequence": false, 00:18:24.515 "rdma_max_cq_size": 0, 00:18:24.515 "rdma_cm_event_timeout_ms": 0, 00:18:24.515 "dhchap_digests": [ 00:18:24.515 "sha256", 00:18:24.515 "sha384", 00:18:24.515 "sha512" 00:18:24.515 ], 00:18:24.515 "dhchap_dhgroups": [ 00:18:24.515 "null", 00:18:24.515 "ffdhe2048", 00:18:24.515 "ffdhe3072", 00:18:24.515 "ffdhe4096", 00:18:24.515 "ffdhe6144", 00:18:24.515 "ffdhe8192" 00:18:24.515 ] 00:18:24.515 } 00:18:24.515 }, 00:18:24.515 { 00:18:24.515 "method": "bdev_nvme_set_hotplug", 00:18:24.515 "params": { 00:18:24.515 "period_us": 100000, 00:18:24.515 "enable": false 00:18:24.515 } 00:18:24.515 }, 00:18:24.515 { 00:18:24.515 "method": "bdev_malloc_create", 00:18:24.515 "params": { 00:18:24.515 "name": "malloc0", 00:18:24.515 "num_blocks": 8192, 00:18:24.515 "block_size": 4096, 00:18:24.515 "physical_block_size": 4096, 00:18:24.515 "uuid": "b19b50ec-edda-4b01-abd6-c66c9a7d099d", 00:18:24.515 "optimal_io_boundary": 0 00:18:24.515 } 00:18:24.515 }, 00:18:24.515 { 00:18:24.515 "method": "bdev_wait_for_examine" 00:18:24.515 } 00:18:24.515 ] 00:18:24.515 }, 00:18:24.515 { 00:18:24.515 "subsystem": "nbd", 00:18:24.515 "config": [] 00:18:24.515 }, 00:18:24.515 { 00:18:24.515 "subsystem": "scheduler", 00:18:24.515 "config": [ 00:18:24.515 { 00:18:24.515 "method": "framework_set_scheduler", 00:18:24.515 "params": { 00:18:24.515 "name": "static" 00:18:24.515 } 00:18:24.515 } 00:18:24.515 ] 00:18:24.515 }, 00:18:24.515 { 00:18:24.515 "subsystem": "nvmf", 00:18:24.515 "config": [ 00:18:24.515 { 00:18:24.515 "method": "nvmf_set_config", 00:18:24.515 "params": { 00:18:24.515 "discovery_filter": "match_any", 00:18:24.515 "admin_cmd_passthru": { 00:18:24.515 "identify_ctrlr": false 00:18:24.515 } 00:18:24.515 } 00:18:24.515 }, 00:18:24.515 { 00:18:24.515 "method": "nvmf_set_max_subsystems", 00:18:24.515 "params": { 00:18:24.515 "max_subsystems": 1024 00:18:24.515 } 00:18:24.515 }, 00:18:24.515 { 00:18:24.516 "method": "nvmf_set_crdt", 00:18:24.516 "params": { 00:18:24.516 "crdt1": 0, 00:18:24.516 "crdt2": 0, 00:18:24.516 "crdt3": 0 00:18:24.516 } 00:18:24.516 }, 00:18:24.516 { 00:18:24.516 "method": "nvmf_create_transport", 00:18:24.516 "params": { 00:18:24.516 "trtype": "TCP", 00:18:24.516 "max_queue_depth": 128, 00:18:24.516 "max_io_qpairs_per_ctrlr": 127, 00:18:24.516 "in_capsule_data_size": 4096, 00:18:24.516 "max_io_size": 131072, 00:18:24.516 "io_unit_size": 131072, 00:18:24.516 "max_aq_depth": 128, 00:18:24.516 "num_shared_buffers": 511, 00:18:24.516 "buf_cache_size": 4294967295, 00:18:24.516 "dif_insert_or_strip": false, 00:18:24.516 "zcopy": false, 00:18:24.516 "c2h_success": false, 00:18:24.516 "sock_priority": 0, 00:18:24.516 "abort_timeout_sec": 1, 00:18:24.516 "ack_timeout": 0, 00:18:24.516 "data_wr_pool_size": 0 00:18:24.516 } 00:18:24.516 }, 00:18:24.516 { 00:18:24.516 "method": "nvmf_create_subsystem", 00:18:24.516 "params": { 00:18:24.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.516 "allow_any_host": false, 00:18:24.516 "serial_number": "00000000000000000000", 00:18:24.516 "model_number": "SPDK bdev Controller", 00:18:24.516 "max_namespaces": 32, 00:18:24.516 "min_cntlid": 1, 00:18:24.516 "max_cntlid": 65519, 00:18:24.516 "ana_reporting": false 00:18:24.516 } 00:18:24.516 }, 00:18:24.516 { 00:18:24.516 "method": "nvmf_subsystem_add_host", 00:18:24.516 "params": { 00:18:24.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.516 "host": "nqn.2016-06.io.spdk:host1", 00:18:24.516 "psk": "key0" 00:18:24.516 } 00:18:24.516 }, 00:18:24.516 
{ 00:18:24.516 "method": "nvmf_subsystem_add_ns", 00:18:24.516 "params": { 00:18:24.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.516 "namespace": { 00:18:24.516 "nsid": 1, 00:18:24.516 "bdev_name": "malloc0", 00:18:24.516 "nguid": "B19B50ECEDDA4B01ABD6C66C9A7D099D", 00:18:24.516 "uuid": "b19b50ec-edda-4b01-abd6-c66c9a7d099d", 00:18:24.516 "no_auto_visible": false 00:18:24.516 } 00:18:24.516 } 00:18:24.516 }, 00:18:24.516 { 00:18:24.516 "method": "nvmf_subsystem_add_listener", 00:18:24.516 "params": { 00:18:24.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.516 "listen_address": { 00:18:24.516 "trtype": "TCP", 00:18:24.516 "adrfam": "IPv4", 00:18:24.516 "traddr": "10.0.0.2", 00:18:24.516 "trsvcid": "4420" 00:18:24.516 }, 00:18:24.516 "secure_channel": true 00:18:24.516 } 00:18:24.516 } 00:18:24.516 ] 00:18:24.516 } 00:18:24.516 ] 00:18:24.516 }' 00:18:24.516 20:17:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:24.516 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:24.516 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.516 20:17:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=242307 00:18:24.516 20:17:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:24.516 20:17:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 242307 00:18:24.516 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 242307 ']' 00:18:24.516 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.516 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:24.516 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.516 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:24.516 20:17:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.516 [2024-05-16 20:17:11.646286] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:18:24.516 [2024-05-16 20:17:11.646362] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.774 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.774 [2024-05-16 20:17:11.710147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.774 [2024-05-16 20:17:11.819461] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.774 [2024-05-16 20:17:11.819528] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.774 [2024-05-16 20:17:11.819557] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.774 [2024-05-16 20:17:11.819568] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.774 [2024-05-16 20:17:11.819578] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
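The -c /dev/fd/62 on the nvmf_tgt command line above closes the save_config round-trip: the tgtcfg JSON captured at target/tls.sh@263 is fed straight back into a fresh target instead of replaying the individual RPCs, so the TLS listener and the PSK-protected host are recreated as part of startup. A minimal sketch of the pattern, assuming bash process substitution is what produces the /dev/fd descriptor seen in the launch line:

# Round-trip a live target's configuration (sketch; socket and namespace names as in this run)
tgtcfg=$(rpc.py save_config)                                # JSON covering the keyring, sock, bdev and nvmf subsystems
# ... old target torn down with killprocess ...
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")   # <(...) would surface as /dev/fd/62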
00:18:24.774 [2024-05-16 20:17:11.819662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.032 [2024-05-16 20:17:12.068059] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.032 [2024-05-16 20:17:12.100016] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:25.032 [2024-05-16 20:17:12.100085] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:25.032 [2024-05-16 20:17:12.108055] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.600 20:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:25.600 20:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:25.600 20:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:25.600 20:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.600 20:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.600 20:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.600 20:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=242455 00:18:25.600 20:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 242455 /var/tmp/bdevperf.sock 00:18:25.600 20:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 242455 ']' 00:18:25.600 20:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:25.600 20:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:25.600 20:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:25.600 20:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:25.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
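On the bdevperf side, the configuration echoed into /dev/fd/63 below encodes the same two-step TLS attach that the earlier cases issued by hand over the bdevperf RPC socket (target/tls.sh@227/@228 and @255/@256): register the PSK file as a keyring entry, then attach the controller referencing that key. A sketch of the manual form, using the paths from this run:

# Manual TLS attach against a bdevperf instance started with -z -r /var/tmp/bdevperf.sock (sketch)
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rGnbouWqRR
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests         # examples/bdev/bdevperf/bdevperf.py drives the configured verify workload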
00:18:25.600 20:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:18:25.600 "subsystems": [ 00:18:25.600 { 00:18:25.600 "subsystem": "keyring", 00:18:25.600 "config": [ 00:18:25.600 { 00:18:25.600 "method": "keyring_file_add_key", 00:18:25.600 "params": { 00:18:25.600 "name": "key0", 00:18:25.600 "path": "/tmp/tmp.rGnbouWqRR" 00:18:25.600 } 00:18:25.600 } 00:18:25.600 ] 00:18:25.600 }, 00:18:25.600 { 00:18:25.600 "subsystem": "iobuf", 00:18:25.600 "config": [ 00:18:25.600 { 00:18:25.600 "method": "iobuf_set_options", 00:18:25.600 "params": { 00:18:25.600 "small_pool_count": 8192, 00:18:25.600 "large_pool_count": 1024, 00:18:25.600 "small_bufsize": 8192, 00:18:25.600 "large_bufsize": 135168 00:18:25.600 } 00:18:25.600 } 00:18:25.600 ] 00:18:25.600 }, 00:18:25.600 { 00:18:25.600 "subsystem": "sock", 00:18:25.600 "config": [ 00:18:25.600 { 00:18:25.600 "method": "sock_set_default_impl", 00:18:25.600 "params": { 00:18:25.600 "impl_name": "posix" 00:18:25.600 } 00:18:25.600 }, 00:18:25.600 { 00:18:25.600 "method": "sock_impl_set_options", 00:18:25.600 "params": { 00:18:25.600 "impl_name": "ssl", 00:18:25.600 "recv_buf_size": 4096, 00:18:25.600 "send_buf_size": 4096, 00:18:25.600 "enable_recv_pipe": true, 00:18:25.600 "enable_quickack": false, 00:18:25.600 "enable_placement_id": 0, 00:18:25.600 "enable_zerocopy_send_server": true, 00:18:25.600 "enable_zerocopy_send_client": false, 00:18:25.600 "zerocopy_threshold": 0, 00:18:25.600 "tls_version": 0, 00:18:25.600 "enable_ktls": false 00:18:25.600 } 00:18:25.600 }, 00:18:25.600 { 00:18:25.600 "method": "sock_impl_set_options", 00:18:25.600 "params": { 00:18:25.600 "impl_name": "posix", 00:18:25.600 "recv_buf_size": 2097152, 00:18:25.601 "send_buf_size": 2097152, 00:18:25.601 "enable_recv_pipe": true, 00:18:25.601 "enable_quickack": false, 00:18:25.601 "enable_placement_id": 0, 00:18:25.601 "enable_zerocopy_send_server": true, 00:18:25.601 "enable_zerocopy_send_client": false, 00:18:25.601 "zerocopy_threshold": 0, 00:18:25.601 "tls_version": 0, 00:18:25.601 "enable_ktls": false 00:18:25.601 } 00:18:25.601 } 00:18:25.601 ] 00:18:25.601 }, 00:18:25.601 { 00:18:25.601 "subsystem": "vmd", 00:18:25.601 "config": [] 00:18:25.601 }, 00:18:25.601 { 00:18:25.601 "subsystem": "accel", 00:18:25.601 "config": [ 00:18:25.601 { 00:18:25.601 "method": "accel_set_options", 00:18:25.601 "params": { 00:18:25.601 "small_cache_size": 128, 00:18:25.601 "large_cache_size": 16, 00:18:25.601 "task_count": 2048, 00:18:25.601 "sequence_count": 2048, 00:18:25.601 "buf_count": 2048 00:18:25.601 } 00:18:25.601 } 00:18:25.601 ] 00:18:25.601 }, 00:18:25.601 { 00:18:25.601 "subsystem": "bdev", 00:18:25.601 "config": [ 00:18:25.601 { 00:18:25.601 "method": "bdev_set_options", 00:18:25.601 "params": { 00:18:25.601 "bdev_io_pool_size": 65535, 00:18:25.601 "bdev_io_cache_size": 256, 00:18:25.601 "bdev_auto_examine": true, 00:18:25.601 "iobuf_small_cache_size": 128, 00:18:25.601 "iobuf_large_cache_size": 16 00:18:25.601 } 00:18:25.601 }, 00:18:25.601 { 00:18:25.601 "method": "bdev_raid_set_options", 00:18:25.601 "params": { 00:18:25.601 "process_window_size_kb": 1024 00:18:25.601 } 00:18:25.601 }, 00:18:25.601 { 00:18:25.601 "method": "bdev_iscsi_set_options", 00:18:25.601 "params": { 00:18:25.601 "timeout_sec": 30 00:18:25.601 } 00:18:25.601 }, 00:18:25.601 { 00:18:25.601 "method": "bdev_nvme_set_options", 00:18:25.601 "params": { 00:18:25.601 "action_on_timeout": "none", 00:18:25.601 "timeout_us": 0, 00:18:25.601 "timeout_admin_us": 0, 00:18:25.601 "keep_alive_timeout_ms": 
10000, 00:18:25.601 "arbitration_burst": 0, 00:18:25.601 "low_priority_weight": 0, 00:18:25.601 "medium_priority_weight": 0, 00:18:25.601 "high_priority_weight": 0, 00:18:25.601 "nvme_adminq_poll_period_us": 10000, 00:18:25.601 "nvme_ioq_poll_period_us": 0, 00:18:25.601 "io_queue_requests": 512, 00:18:25.601 "delay_cmd_submit": true, 00:18:25.601 "transport_retry_count": 4, 00:18:25.601 "bdev_retry_count": 3, 00:18:25.601 "transport_ack_timeout": 0, 00:18:25.601 "ctrlr_loss_timeout_sec": 0, 00:18:25.601 "reconnect_delay_sec": 0, 00:18:25.601 "fast_io_fail_timeout_sec": 0, 00:18:25.601 "disable_auto_failback": false, 00:18:25.601 "generate_uuids": false, 00:18:25.601 "transport_tos": 0, 00:18:25.601 "nvme_error_stat": false, 00:18:25.601 "rdma_srq_size": 0, 00:18:25.601 "io_path_stat": false, 00:18:25.601 "allow_accel_sequence": false, 00:18:25.601 "rdma_max_cq_size": 0, 00:18:25.601 "rdma_cm_event_timeout_ms": 0, 00:18:25.601 "dhchap_digests": [ 00:18:25.601 "sha256", 00:18:25.601 "sha384", 00:18:25.601 "sha512" 00:18:25.601 ], 00:18:25.601 "dhchap_dhgroups": [ 00:18:25.601 "null", 00:18:25.601 "ffdhe2048", 00:18:25.601 "ffdhe3072", 00:18:25.601 "ffdhe4096", 00:18:25.601 "ffdhe6144", 00:18:25.601 "ffdhe8192" 00:18:25.601 ] 00:18:25.601 } 00:18:25.601 }, 00:18:25.601 { 00:18:25.601 "method": "bdev_nvme_attach_controller", 00:18:25.601 "params": { 00:18:25.601 "name": "nvme0", 00:18:25.601 "trtype": "TCP", 00:18:25.601 "adrfam": "IPv4", 00:18:25.601 "traddr": "10.0.0.2", 00:18:25.601 "trsvcid": "4420", 00:18:25.601 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.601 "prchk_reftag": false, 00:18:25.601 "prchk_guard": false, 00:18:25.601 "ctrlr_loss_timeout_sec": 0, 00:18:25.601 "reconnect_delay_sec": 0, 00:18:25.601 "fast_io_fail_timeout_sec": 0, 00:18:25.601 "psk": "key0", 00:18:25.601 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:25.601 "hdgst": false, 00:18:25.601 "ddgst": false 00:18:25.601 } 00:18:25.601 }, 00:18:25.601 { 00:18:25.601 "method": "bdev_nvme_set_hotplug", 00:18:25.601 "params": { 00:18:25.601 "period_us": 100000, 00:18:25.601 "enable": false 00:18:25.601 } 00:18:25.601 }, 00:18:25.601 { 00:18:25.601 "method": "bdev_enable_histogram", 00:18:25.601 "params": { 00:18:25.601 "name": "nvme0n1", 00:18:25.601 "enable": true 00:18:25.601 } 00:18:25.601 }, 00:18:25.601 { 00:18:25.601 "method": "bdev_wait_for_examine" 00:18:25.601 } 00:18:25.601 ] 00:18:25.601 }, 00:18:25.601 { 00:18:25.601 "subsystem": "nbd", 00:18:25.601 "config": [] 00:18:25.601 } 00:18:25.601 ] 00:18:25.601 }' 00:18:25.601 20:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:25.601 20:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.601 [2024-05-16 20:17:12.663505] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
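The JSON blob echoed above is the complete configuration the TLS test hands to bdevperf: it pre-loads the PSK file into the keyring subsystem as "key0", tunes the sock and bdev layers, and attaches an NVMe/TCP controller to 10.0.0.2:4420 that references the key via "psk": "key0". As a rough stand-alone sketch only (the temp key path is the one from this run, the output file name is made up, and the exact config-file option should be checked against this SPDK build's bdevperf --help), a pre-generated config of this shape can be fed to bdevperf like so:

  # Abbreviated: only the keyring fragment of the config printed above;
  # the real config also carries the sock, bdev and nbd subsystems.
  echo '{"subsystems":[{"subsystem":"keyring","config":[{"method":"keyring_file_add_key","params":{"name":"key0","path":"/tmp/tmp.rGnbouWqRR"}}]}]}' > /tmp/bdevperf_tls.json
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start bdevperf from the canned configuration instead of issuing RPCs afterwards.
  $SPDK/build/examples/bdevperf --json /tmp/bdevperf_tls.json -q 128 -o 4096 -w verify -t 1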
00:18:25.601 [2024-05-16 20:17:12.663578] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid242455 ] 00:18:25.601 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.601 [2024-05-16 20:17:12.727648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.860 [2024-05-16 20:17:12.846817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.118 [2024-05-16 20:17:13.035399] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:26.683 20:17:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:26.683 20:17:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:26.683 20:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:26.683 20:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:26.941 20:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.941 20:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:26.941 Running I/O for 1 seconds... 00:18:27.881 00:18:27.881 Latency(us) 00:18:27.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.881 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:27.881 Verification LBA range: start 0x0 length 0x2000 00:18:27.881 nvme0n1 : 1.03 3128.24 12.22 0.00 0.00 40375.72 9369.22 34952.53 00:18:27.881 =================================================================================================================== 00:18:27.881 Total : 3128.24 12.22 0.00 0.00 40375.72 9369.22 34952.53 00:18:27.881 0 00:18:27.881 20:17:15 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:27.881 20:17:15 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:18:27.881 20:17:15 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:27.881 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:18:27.881 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:18:27.881 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:18:27.881 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:28.139 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:18:28.139 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:18:28.139 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:18:28.139 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:28.139 nvmf_trace.0 00:18:28.139 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:18:28.139 20:17:15 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 242455 00:18:28.139 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 242455 ']' 00:18:28.139 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 242455 
00:18:28.139 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:28.139 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:28.139 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 242455 00:18:28.139 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:28.139 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:28.139 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 242455' 00:18:28.139 killing process with pid 242455 00:18:28.139 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 242455 00:18:28.139 Received shutdown signal, test time was about 1.000000 seconds 00:18:28.139 00:18:28.139 Latency(us) 00:18:28.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.139 =================================================================================================================== 00:18:28.139 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:28.139 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 242455 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:28.398 rmmod nvme_tcp 00:18:28.398 rmmod nvme_fabrics 00:18:28.398 rmmod nvme_keyring 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 242307 ']' 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 242307 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 242307 ']' 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 242307 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 242307 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 242307' 00:18:28.398 killing process with pid 242307 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 242307 00:18:28.398 [2024-05-16 20:17:15.486958] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:28.398 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 242307 
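The span above is the standard teardown for these tests: process_shm archives the /dev/shm trace file, killprocess stops the bdevperf instance (pid 242455), nvmftestfini unloads nvme-tcp, nvme-fabrics and nvme-keyring, and finally the nvmf target itself (pid 242307) is killed and waited on. A condensed sketch of the kill-and-wait pattern being traced here, simplified relative to the real helper in test/common/autotest_common.sh (the sudo branch is only hinted at in a comment):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 0                  # already gone, nothing to clean up
      local pname
      pname=$(ps --no-headers -o comm= "$pid")    # the real helper branches when this is "sudo"
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                         # reap it so the EXIT trap can carry on
  }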
00:18:28.656 20:17:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:28.656 20:17:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:28.656 20:17:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:28.656 20:17:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:28.656 20:17:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:28.656 20:17:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.656 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.656 20:17:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.191 20:17:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:31.191 20:17:17 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.6IxzXeuE8q /tmp/tmp.HvK9iJkUq0 /tmp/tmp.rGnbouWqRR 00:18:31.191 00:18:31.191 real 1m20.783s 00:18:31.191 user 2m6.428s 00:18:31.191 sys 0m26.615s 00:18:31.191 20:17:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:31.191 20:17:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.191 ************************************ 00:18:31.191 END TEST nvmf_tls 00:18:31.191 ************************************ 00:18:31.191 20:17:17 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:31.191 20:17:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:31.191 20:17:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:31.191 20:17:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:31.191 ************************************ 00:18:31.191 START TEST nvmf_fips 00:18:31.191 ************************************ 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:31.191 * Looking for test storage... 
00:18:31.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.191 20:17:17 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.191 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:31.192 20:17:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:31.192 Error setting digest 00:18:31.192 0072576C7D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:31.192 0072576C7D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:31.192 20:17:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:33.190 
20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:33.190 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.190 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:33.191 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:33.191 Found net devices under 0000:09:00.0: cvl_0_0 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:33.191 Found net devices under 0000:09:00.1: cvl_0_1 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:33.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:33.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:18:33.191 00:18:33.191 --- 10.0.0.2 ping statistics --- 00:18:33.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.191 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:33.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:33.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:18:33.191 00:18:33.191 --- 10.0.0.1 ping statistics --- 00:18:33.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.191 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=244696 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 244696 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 244696 ']' 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:33.191 20:17:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:33.191 [2024-05-16 20:17:20.332206] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:18:33.191 [2024-05-16 20:17:20.332314] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.449 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.449 [2024-05-16 20:17:20.402170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.449 [2024-05-16 20:17:20.521689] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.449 [2024-05-16 20:17:20.521754] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:33.449 [2024-05-16 20:17:20.521770] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.449 [2024-05-16 20:17:20.521784] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.449 [2024-05-16 20:17:20.521801] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:33.449 [2024-05-16 20:17:20.521831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.383 20:17:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:34.383 20:17:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:18:34.383 20:17:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:34.383 20:17:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:34.383 20:17:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:34.383 20:17:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.383 20:17:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:34.383 20:17:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:34.383 20:17:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:34.383 20:17:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:34.383 20:17:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:34.383 20:17:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:34.383 20:17:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:34.383 20:17:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:34.642 [2024-05-16 20:17:21.559869] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.642 [2024-05-16 20:17:21.575779] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:34.642 [2024-05-16 20:17:21.575868] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:34.642 [2024-05-16 20:17:21.576078] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.642 [2024-05-16 20:17:21.607515] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:34.642 malloc0 00:18:34.642 20:17:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:34.642 20:17:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=244936 00:18:34.642 20:17:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:34.642 20:17:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 244936 /var/tmp/bdevperf.sock 00:18:34.642 20:17:21 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@827 -- # '[' -z 244936 ']' 00:18:34.642 20:17:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.642 20:17:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:34.642 20:17:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.642 20:17:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:34.642 20:17:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:34.642 [2024-05-16 20:17:21.696751] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:18:34.642 [2024-05-16 20:17:21.696844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid244936 ] 00:18:34.642 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.642 [2024-05-16 20:17:21.755459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.900 [2024-05-16 20:17:21.864585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.900 20:17:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:34.900 20:17:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:18:34.900 20:17:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:35.158 [2024-05-16 20:17:22.171669] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:35.158 [2024-05-16 20:17:22.171797] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:35.158 TLSTESTn1 00:18:35.158 20:17:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:35.416 Running I/O for 10 seconds... 
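At this point bdevperf has been started with -z (wait for RPC) on /var/tmp/bdevperf.sock, the TLS-protected controller TLSTEST has been attached through that socket, and the verify workload is being driven by the bdevperf.py helper; its 10-second results follow below. Condensed, the two RPC-side steps of this run look like this (paths, flags and NQNs exactly as used above; nothing here is new):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bdevperf.sock
  # Attach an NVMe/TCP controller that negotiates TLS with the pre-shared key.
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk $SPDK/test/nvmf/fips/key.txt
  # Kick off the workload configured on the bdevperf command line (-q 128 -o 4096 -w verify -t 10).
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests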
00:18:45.386 00:18:45.386 Latency(us) 00:18:45.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.386 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:45.386 Verification LBA range: start 0x0 length 0x2000 00:18:45.386 TLSTESTn1 : 10.02 3418.95 13.36 0.00 0.00 37377.84 6505.05 44273.21 00:18:45.386 =================================================================================================================== 00:18:45.386 Total : 3418.95 13.36 0.00 0.00 37377.84 6505.05 44273.21 00:18:45.386 0 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:45.386 nvmf_trace.0 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 244936 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 244936 ']' 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 244936 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 244936 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 244936' 00:18:45.386 killing process with pid 244936 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 244936 00:18:45.386 Received shutdown signal, test time was about 10.000000 seconds 00:18:45.386 00:18:45.386 Latency(us) 00:18:45.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.386 =================================================================================================================== 00:18:45.386 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:45.386 [2024-05-16 20:17:32.517613] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:45.386 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 244936 00:18:45.644 20:17:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:45.644 20:17:32 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:18:45.644 20:17:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:45.644 20:17:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:45.644 20:17:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:45.644 20:17:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:45.644 20:17:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:45.644 rmmod nvme_tcp 00:18:45.902 rmmod nvme_fabrics 00:18:45.902 rmmod nvme_keyring 00:18:45.902 20:17:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:45.902 20:17:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:45.902 20:17:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:45.902 20:17:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 244696 ']' 00:18:45.902 20:17:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 244696 00:18:45.902 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 244696 ']' 00:18:45.902 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 244696 00:18:45.902 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:18:45.902 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:45.902 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 244696 00:18:45.902 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:45.902 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:45.902 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 244696' 00:18:45.902 killing process with pid 244696 00:18:45.902 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 244696 00:18:45.902 [2024-05-16 20:17:32.876693] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:45.902 [2024-05-16 20:17:32.876752] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:45.902 20:17:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 244696 00:18:46.160 20:17:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:46.160 20:17:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:46.160 20:17:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:46.160 20:17:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:46.160 20:17:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:46.160 20:17:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.160 20:17:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.160 20:17:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.061 20:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:48.061 20:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:48.061 00:18:48.061 real 0m17.326s 00:18:48.061 user 0m22.782s 00:18:48.061 sys 0m5.099s 00:18:48.061 20:17:35 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:18:48.061 20:17:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:48.061 ************************************ 00:18:48.061 END TEST nvmf_fips 00:18:48.061 ************************************ 00:18:48.319 20:17:35 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:18:48.319 20:17:35 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:18:48.319 20:17:35 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:18:48.319 20:17:35 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:18:48.319 20:17:35 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:18:48.319 20:17:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:50.287 20:17:37 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.288 20:17:37 nvmf_tcp -- 
nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:50.288 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:50.288 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:50.288 Found net devices under 0000:09:00.0: cvl_0_0 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:50.288 Found net devices under 0000:09:00.1: cvl_0_1 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:18:50.288 20:17:37 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:50.288 20:17:37 
nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:50.288 20:17:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:50.288 20:17:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:50.288 ************************************ 00:18:50.288 START TEST nvmf_perf_adq 00:18:50.288 ************************************ 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:50.288 * Looking for test storage... 00:18:50.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:50.288 20:17:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:52.298 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
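The gather_supported_nvmf_pci_devs block being re-traced here whitelists NIC PCI device IDs (E810 0x1592/0x159b, X722 0x37d2 and several Mellanox parts) and then resolves each matching PCI function to its kernel net device through sysfs, which is how cvl_0_0 and cvl_0_1 end up in TCP_INTERFACE_LIST. A minimal standalone sketch of that sysfs walk, assuming only the 8086:159b ports present on this node:
  intel=0x8086 e810=0x159b
  for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == "$e810" ]] || continue
      for net in "$pci"/net/*; do                          # netdevs bound to this PCI function
          [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done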
00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:52.298 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:52.298 Found net devices under 0000:09:00.0: cvl_0_0 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:52.298 Found net devices under 0000:09:00.1: cvl_0_1 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:18:52.298 20:17:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:52.865 20:17:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:18:55.392 20:17:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:00.656 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:00.657 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:00.657 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:00.657 Found net devices under 0000:09:00.0: cvl_0_0 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:00.657 Found net devices under 0000:09:00.1: cvl_0_1 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:00.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:19:00.657 00:19:00.657 --- 10.0.0.2 ping statistics --- 00:19:00.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.657 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:00.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:00.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:19:00.657 00:19:00.657 --- 10.0.0.1 ping statistics --- 00:19:00.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.657 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=250735 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 250735 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 250735 ']' 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
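nvmf_tcp_init, traced above, builds the physical-loopback topology used for the rest of the run: one E810 port (cvl_0_0, 10.0.0.2/24) is moved into the cvl_0_0_ns_spdk namespace to serve as the target side, its sibling (cvl_0_1, 10.0.0.1/24) stays in the root namespace as the initiator, TCP port 4420 is opened in iptables, connectivity is ping-checked in both directions and the nvme-tcp host driver is loaded. Condensed, without the harness wrappers:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator
  modprobe nvme-tcp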
00:19:00.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:00.657 20:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:00.657 [2024-05-16 20:17:47.707632] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:19:00.657 [2024-05-16 20:17:47.707715] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.657 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.657 [2024-05-16 20:17:47.774683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:00.915 [2024-05-16 20:17:47.887915] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.916 [2024-05-16 20:17:47.887968] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.916 [2024-05-16 20:17:47.887990] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.916 [2024-05-16 20:17:47.888001] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.916 [2024-05-16 20:17:47.888011] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:00.916 [2024-05-16 20:17:47.888061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.916 [2024-05-16 20:17:47.888128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.916 [2024-05-16 20:17:47.888169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:00.916 [2024-05-16 20:17:47.888172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.916 20:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:00.916 20:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:19:00.916 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:00.916 20:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:00.916 20:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:00.916 20:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.916 20:17:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:00.916 20:17:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:00.916 20:17:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:00.916 20:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.916 20:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:00.916 20:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.916 20:17:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:00.916 20:17:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:00.916 20:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.916 20:17:47 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:19:00.916 20:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.916 20:17:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:00.916 20:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.916 20:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.174 [2024-05-16 20:17:48.116475] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.174 Malloc1 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.174 [2024-05-16 20:17:48.168779] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:01.174 [2024-05-16 20:17:48.169104] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=250887 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:01.174 20:17:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:01.174 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.076 20:17:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:03.076 20:17:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.076 20:17:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:03.076 20:17:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.076 20:17:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:03.076 "tick_rate": 2700000000, 00:19:03.076 "poll_groups": [ 00:19:03.076 { 00:19:03.076 "name": "nvmf_tgt_poll_group_000", 00:19:03.076 "admin_qpairs": 1, 00:19:03.076 "io_qpairs": 1, 00:19:03.076 "current_admin_qpairs": 1, 00:19:03.076 "current_io_qpairs": 1, 00:19:03.076 "pending_bdev_io": 0, 00:19:03.076 "completed_nvme_io": 19910, 00:19:03.076 "transports": [ 00:19:03.076 { 00:19:03.076 "trtype": "TCP" 00:19:03.076 } 00:19:03.076 ] 00:19:03.076 }, 00:19:03.076 { 00:19:03.076 "name": "nvmf_tgt_poll_group_001", 00:19:03.076 "admin_qpairs": 0, 00:19:03.076 "io_qpairs": 1, 00:19:03.076 "current_admin_qpairs": 0, 00:19:03.076 "current_io_qpairs": 1, 00:19:03.076 "pending_bdev_io": 0, 00:19:03.076 "completed_nvme_io": 19904, 00:19:03.076 "transports": [ 00:19:03.076 { 00:19:03.076 "trtype": "TCP" 00:19:03.076 } 00:19:03.076 ] 00:19:03.076 }, 00:19:03.076 { 00:19:03.076 "name": "nvmf_tgt_poll_group_002", 00:19:03.076 "admin_qpairs": 0, 00:19:03.076 "io_qpairs": 1, 00:19:03.076 "current_admin_qpairs": 0, 00:19:03.076 "current_io_qpairs": 1, 00:19:03.076 "pending_bdev_io": 0, 00:19:03.076 "completed_nvme_io": 19413, 00:19:03.076 "transports": [ 00:19:03.076 { 00:19:03.076 "trtype": "TCP" 00:19:03.076 } 00:19:03.076 ] 00:19:03.076 }, 00:19:03.076 { 00:19:03.076 "name": "nvmf_tgt_poll_group_003", 00:19:03.076 "admin_qpairs": 0, 00:19:03.076 "io_qpairs": 1, 00:19:03.076 "current_admin_qpairs": 0, 00:19:03.076 "current_io_qpairs": 1, 00:19:03.076 "pending_bdev_io": 0, 00:19:03.076 "completed_nvme_io": 20246, 00:19:03.076 "transports": [ 00:19:03.076 { 00:19:03.076 "trtype": "TCP" 00:19:03.076 } 00:19:03.076 ] 00:19:03.076 } 00:19:03.076 ] 00:19:03.076 }' 00:19:03.076 20:17:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:03.076 20:17:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:03.334 20:17:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:03.334 20:17:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:03.334 20:17:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 250887 00:19:11.443 Initializing NVMe Controllers 00:19:11.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:11.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:11.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:11.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:11.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:11.444 Initialization complete. Launching workers. 
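The nvmf_get_stats check traced just above is how the test validates connection spreading before trusting the perf numbers: with the target reactors on mask 0xF and spdk_nvme_perf driving one connection per core from mask 0xF0, each of the four target poll groups must own exactly one active I/O qpair. The same check outside the harness, using scripts/rpc.py in place of the rpc_cmd wrapper:
  count=$(scripts/rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
            | wc -l)
  if [[ $count -ne 4 ]]; then
      echo "io qpairs were not spread one per poll group" >&2
  fi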
00:19:11.444 ======================================================== 00:19:11.444 Latency(us) 00:19:11.444 Device Information : IOPS MiB/s Average min max 00:19:11.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10134.90 39.59 6314.97 2033.37 10642.08 00:19:11.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10326.10 40.34 6198.24 2447.75 9938.36 00:19:11.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10581.50 41.33 6050.37 2182.01 9862.60 00:19:11.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10486.60 40.96 6103.92 2341.81 10441.02 00:19:11.444 ======================================================== 00:19:11.444 Total : 41529.10 162.22 6165.23 2033.37 10642.08 00:19:11.444 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:11.444 rmmod nvme_tcp 00:19:11.444 rmmod nvme_fabrics 00:19:11.444 rmmod nvme_keyring 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 250735 ']' 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 250735 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 250735 ']' 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 250735 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 250735 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 250735' 00:19:11.444 killing process with pid 250735 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 250735 00:19:11.444 [2024-05-16 20:17:58.387182] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:11.444 20:17:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 250735 00:19:11.701 20:17:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:11.701 20:17:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:11.701 20:17:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:11.701 20:17:58 
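For orientation, the Total row in the latency table above is simply the sum of the four per-core rows: 10134.90 + 10326.10 + 10581.50 + 10486.60 = 41529.10 IOPS, which at this run's 4096-byte random-read size corresponds to the reported 162.22 MiB/s (41529.10 * 4096 / 2^20 bytes per second).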
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:11.701 20:17:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:11.701 20:17:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.701 20:17:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.701 20:17:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.598 20:18:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:13.598 20:18:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:13.598 20:18:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:14.531 20:18:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:15.905 20:18:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:21.174 20:18:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:21.174 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:21.174 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.174 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:21.174 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:21.174 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:21.174 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.174 20:18:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.174 20:18:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:21.175 
20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:21.175 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:21.175 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:21.175 Found net devices under 0000:09:00.0: cvl_0_0 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:21.175 Found net devices under 0000:09:00.1: cvl_0_1 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:21.175 20:18:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:21.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:21.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:19:21.175 00:19:21.175 --- 10.0.0.2 ping statistics --- 00:19:21.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.175 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:21.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:21.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:19:21.175 00:19:21.175 --- 10.0.0.1 ping statistics --- 00:19:21.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.175 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:21.175 net.core.busy_poll = 1 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:21.175 net.core.busy_read = 1 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:21.175 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:21.176 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:21.176 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:21.176 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:21.176 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:21.176 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:21.176 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=253979 00:19:21.176 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:21.176 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 253979 00:19:21.176 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 253979 ']' 00:19:21.176 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.176 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:21.176 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.176 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:21.176 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:21.434 [2024-05-16 20:18:08.323920] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:19:21.434 [2024-05-16 20:18:08.324007] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.434 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.434 [2024-05-16 20:18:08.386983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:21.434 [2024-05-16 20:18:08.496912] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.434 [2024-05-16 20:18:08.496981] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.434 [2024-05-16 20:18:08.496996] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.434 [2024-05-16 20:18:08.497007] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.434 [2024-05-16 20:18:08.497016] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
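Pulled together, the ADQ bring-up traced above (perf_adq.sh@22-38) amounts to the following sketch; the interface name, address and queue layout are the ones from this run, and in the test every command is executed inside the cvl_0_0_ns_spdk namespace via ip netns exec:

# NIC: enable hardware TC offload and turn off the packet-inspect optimization flag
ethtool --offload cvl_0_0 hw-tc-offload on
ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off

# Kernel: busy-poll the socket queues instead of waiting for interrupts
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Queues: two traffic classes of two queues each (2@0 and 2@2), offloaded to the NIC,
# then a flower filter that pins NVMe/TCP traffic for 10.0.0.2:4420 to TC 1 in hardware
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

# Align transmit/receive queue affinities for the ADQ queues
scripts/perf/nvmf/set_xps_rxqs cvl_0_0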
00:19:21.434 [2024-05-16 20:18:08.497278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.434 [2024-05-16 20:18:08.497310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.434 [2024-05-16 20:18:08.497365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:21.434 [2024-05-16 20:18:08.497368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.434 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:21.434 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:19:21.434 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:21.434 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:21.434 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:21.434 20:18:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.434 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:21.434 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:21.434 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:21.434 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.434 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:21.434 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:21.693 [2024-05-16 20:18:08.718780] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:21.693 Malloc1 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.693 20:18:08 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:21.693 [2024-05-16 20:18:08.771719] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:21.693 [2024-05-16 20:18:08.772042] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=254063 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:21.693 20:18:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:21.693 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.221 20:18:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:24.221 20:18:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.221 20:18:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:24.221 20:18:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.221 20:18:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:24.221 "tick_rate": 2700000000, 00:19:24.221 "poll_groups": [ 00:19:24.221 { 00:19:24.221 "name": "nvmf_tgt_poll_group_000", 00:19:24.221 "admin_qpairs": 1, 00:19:24.221 "io_qpairs": 2, 00:19:24.221 "current_admin_qpairs": 1, 00:19:24.221 "current_io_qpairs": 2, 00:19:24.221 "pending_bdev_io": 0, 00:19:24.221 "completed_nvme_io": 26310, 00:19:24.221 "transports": [ 00:19:24.221 { 00:19:24.221 "trtype": "TCP" 00:19:24.221 } 00:19:24.221 ] 00:19:24.221 }, 00:19:24.221 { 00:19:24.221 "name": "nvmf_tgt_poll_group_001", 00:19:24.221 "admin_qpairs": 0, 00:19:24.221 "io_qpairs": 2, 00:19:24.221 "current_admin_qpairs": 0, 00:19:24.221 "current_io_qpairs": 2, 00:19:24.221 "pending_bdev_io": 0, 00:19:24.221 "completed_nvme_io": 26978, 00:19:24.221 "transports": [ 00:19:24.221 { 00:19:24.221 "trtype": "TCP" 00:19:24.221 } 00:19:24.221 ] 00:19:24.221 }, 00:19:24.221 { 00:19:24.221 "name": 
"nvmf_tgt_poll_group_002", 00:19:24.221 "admin_qpairs": 0, 00:19:24.221 "io_qpairs": 0, 00:19:24.221 "current_admin_qpairs": 0, 00:19:24.221 "current_io_qpairs": 0, 00:19:24.221 "pending_bdev_io": 0, 00:19:24.221 "completed_nvme_io": 0, 00:19:24.221 "transports": [ 00:19:24.221 { 00:19:24.221 "trtype": "TCP" 00:19:24.221 } 00:19:24.221 ] 00:19:24.221 }, 00:19:24.221 { 00:19:24.221 "name": "nvmf_tgt_poll_group_003", 00:19:24.221 "admin_qpairs": 0, 00:19:24.221 "io_qpairs": 0, 00:19:24.221 "current_admin_qpairs": 0, 00:19:24.221 "current_io_qpairs": 0, 00:19:24.221 "pending_bdev_io": 0, 00:19:24.221 "completed_nvme_io": 0, 00:19:24.221 "transports": [ 00:19:24.221 { 00:19:24.221 "trtype": "TCP" 00:19:24.221 } 00:19:24.221 ] 00:19:24.221 } 00:19:24.221 ] 00:19:24.221 }' 00:19:24.221 20:18:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:24.221 20:18:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:24.221 20:18:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:24.221 20:18:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:24.221 20:18:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 254063 00:19:32.413 Initializing NVMe Controllers 00:19:32.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:32.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:32.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:32.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:32.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:32.413 Initialization complete. Launching workers. 
00:19:32.413 ======================================================== 00:19:32.413 Latency(us) 00:19:32.413 Device Information : IOPS MiB/s Average min max 00:19:32.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6009.26 23.47 10651.03 1668.70 53604.41 00:19:32.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7579.95 29.61 8454.20 1650.42 55429.79 00:19:32.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7813.05 30.52 8204.42 1793.15 54326.08 00:19:32.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6146.96 24.01 10411.33 1552.46 56088.61 00:19:32.413 ======================================================== 00:19:32.413 Total : 27549.21 107.61 9299.24 1552.46 56088.61 00:19:32.413 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:32.413 rmmod nvme_tcp 00:19:32.413 rmmod nvme_fabrics 00:19:32.413 rmmod nvme_keyring 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 253979 ']' 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 253979 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 253979 ']' 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 253979 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 253979 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 253979' 00:19:32.413 killing process with pid 253979 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 253979 00:19:32.413 [2024-05-16 20:18:18.963411] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:32.413 20:18:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 253979 00:19:32.413 20:18:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:32.413 20:18:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:32.413 20:18:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:32.413 20:18:19 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:32.413 20:18:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:32.413 20:18:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.413 20:18:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.413 20:18:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.314 20:18:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:34.314 20:18:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:34.314 00:19:34.314 real 0m44.080s 00:19:34.314 user 2m39.648s 00:19:34.314 sys 0m10.147s 00:19:34.314 20:18:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:34.314 20:18:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:34.314 ************************************ 00:19:34.314 END TEST nvmf_perf_adq 00:19:34.314 ************************************ 00:19:34.314 20:18:21 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:34.314 20:18:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:34.314 20:18:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:34.314 20:18:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:34.314 ************************************ 00:19:34.314 START TEST nvmf_shutdown 00:19:34.314 ************************************ 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:34.314 * Looking for test storage... 
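For reference, the nvmftestfini/nvmf_tcp_fini sequence that closed out nvmf_perf_adq above boils down to the following; the body of remove_spdk_ns is not expanded in this trace, so the netns deletion line is an assumption about what it does:

sync
modprobe -v -r nvme-tcp           # the rmmod lines above show this also pulls out nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                   # stop the nvmf_tgt reactors (pid 253979 in this run)
ip netns delete cvl_0_0_ns_spdk   # assumed: how remove_spdk_ns drops the target namespace
ip -4 addr flush cvl_0_1          # clear the initiator-side address (common.sh@279 above)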
00:19:34.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:34.314 20:18:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:34.572 ************************************ 00:19:34.572 START TEST nvmf_shutdown_tc1 00:19:34.572 ************************************ 00:19:34.572 20:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:19:34.572 20:18:21 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:34.572 20:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:34.572 20:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:34.572 20:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.572 20:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:34.572 20:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:34.572 20:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:34.572 20:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.572 20:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.572 20:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.572 20:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:34.572 20:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:34.572 20:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:34.572 20:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:36.474 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:36.474 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.474 20:18:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:36.474 Found net devices under 0000:09:00.0: cvl_0_0 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:36.474 Found net devices under 0000:09:00.1: cvl_0_1 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:36.474 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:36.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:36.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:19:36.475 00:19:36.475 --- 10.0.0.2 ping statistics --- 00:19:36.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.475 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:36.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:36.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:19:36.475 00:19:36.475 --- 10.0.0.1 ping statistics --- 00:19:36.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.475 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=257312 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 257312 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 257312 ']' 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:36.475 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.734 [2024-05-16 20:18:23.621764] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:19:36.734 [2024-05-16 20:18:23.621876] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.734 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.734 [2024-05-16 20:18:23.685364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:36.734 [2024-05-16 20:18:23.792473] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.734 [2024-05-16 20:18:23.792527] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.734 [2024-05-16 20:18:23.792555] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.734 [2024-05-16 20:18:23.792567] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.734 [2024-05-16 20:18:23.792577] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.734 [2024-05-16 20:18:23.792675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.734 [2024-05-16 20:18:23.792740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:36.734 [2024-05-16 20:18:23.793048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:36.734 [2024-05-16 20:18:23.793053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.992 [2024-05-16 20:18:23.954794] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.992 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.992 Malloc1 00:19:36.992 [2024-05-16 20:18:24.042139] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:36.992 [2024-05-16 20:18:24.042433] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.992 Malloc2 00:19:36.992 Malloc3 00:19:37.249 Malloc4 00:19:37.249 Malloc5 00:19:37.249 Malloc6 00:19:37.249 Malloc7 00:19:37.249 Malloc8 00:19:37.507 Malloc9 00:19:37.507 Malloc10 00:19:37.507 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.507 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:37.508 20:18:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=257489 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 257489 /var/tmp/bdevperf.sock 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 257489 ']' 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:37.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.508 { 00:19:37.508 "params": { 00:19:37.508 "name": "Nvme$subsystem", 00:19:37.508 "trtype": "$TEST_TRANSPORT", 00:19:37.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.508 "adrfam": "ipv4", 00:19:37.508 "trsvcid": "$NVMF_PORT", 00:19:37.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.508 "hdgst": ${hdgst:-false}, 00:19:37.508 "ddgst": ${ddgst:-false} 00:19:37.508 }, 00:19:37.508 "method": "bdev_nvme_attach_controller" 00:19:37.508 } 00:19:37.508 EOF 00:19:37.508 )") 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.508 { 00:19:37.508 "params": { 00:19:37.508 "name": "Nvme$subsystem", 00:19:37.508 "trtype": "$TEST_TRANSPORT", 00:19:37.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.508 "adrfam": "ipv4", 00:19:37.508 "trsvcid": "$NVMF_PORT", 00:19:37.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.508 "hdgst": ${hdgst:-false}, 00:19:37.508 "ddgst": ${ddgst:-false} 00:19:37.508 }, 00:19:37.508 "method": "bdev_nvme_attach_controller" 00:19:37.508 } 00:19:37.508 EOF 00:19:37.508 )") 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- 
# for subsystem in "${@:-1}" 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.508 { 00:19:37.508 "params": { 00:19:37.508 "name": "Nvme$subsystem", 00:19:37.508 "trtype": "$TEST_TRANSPORT", 00:19:37.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.508 "adrfam": "ipv4", 00:19:37.508 "trsvcid": "$NVMF_PORT", 00:19:37.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.508 "hdgst": ${hdgst:-false}, 00:19:37.508 "ddgst": ${ddgst:-false} 00:19:37.508 }, 00:19:37.508 "method": "bdev_nvme_attach_controller" 00:19:37.508 } 00:19:37.508 EOF 00:19:37.508 )") 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.508 { 00:19:37.508 "params": { 00:19:37.508 "name": "Nvme$subsystem", 00:19:37.508 "trtype": "$TEST_TRANSPORT", 00:19:37.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.508 "adrfam": "ipv4", 00:19:37.508 "trsvcid": "$NVMF_PORT", 00:19:37.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.508 "hdgst": ${hdgst:-false}, 00:19:37.508 "ddgst": ${ddgst:-false} 00:19:37.508 }, 00:19:37.508 "method": "bdev_nvme_attach_controller" 00:19:37.508 } 00:19:37.508 EOF 00:19:37.508 )") 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.508 { 00:19:37.508 "params": { 00:19:37.508 "name": "Nvme$subsystem", 00:19:37.508 "trtype": "$TEST_TRANSPORT", 00:19:37.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.508 "adrfam": "ipv4", 00:19:37.508 "trsvcid": "$NVMF_PORT", 00:19:37.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.508 "hdgst": ${hdgst:-false}, 00:19:37.508 "ddgst": ${ddgst:-false} 00:19:37.508 }, 00:19:37.508 "method": "bdev_nvme_attach_controller" 00:19:37.508 } 00:19:37.508 EOF 00:19:37.508 )") 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.508 { 00:19:37.508 "params": { 00:19:37.508 "name": "Nvme$subsystem", 00:19:37.508 "trtype": "$TEST_TRANSPORT", 00:19:37.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.508 "adrfam": "ipv4", 00:19:37.508 "trsvcid": "$NVMF_PORT", 00:19:37.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.508 "hdgst": ${hdgst:-false}, 00:19:37.508 "ddgst": ${ddgst:-false} 00:19:37.508 }, 00:19:37.508 "method": "bdev_nvme_attach_controller" 00:19:37.508 } 00:19:37.508 EOF 00:19:37.508 )") 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.508 { 00:19:37.508 "params": { 00:19:37.508 "name": "Nvme$subsystem", 00:19:37.508 "trtype": "$TEST_TRANSPORT", 00:19:37.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.508 "adrfam": "ipv4", 00:19:37.508 "trsvcid": "$NVMF_PORT", 00:19:37.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.508 "hdgst": ${hdgst:-false}, 00:19:37.508 "ddgst": ${ddgst:-false} 00:19:37.508 }, 00:19:37.508 "method": "bdev_nvme_attach_controller" 00:19:37.508 } 00:19:37.508 EOF 00:19:37.508 )") 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.508 { 00:19:37.508 "params": { 00:19:37.508 "name": "Nvme$subsystem", 00:19:37.508 "trtype": "$TEST_TRANSPORT", 00:19:37.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.508 "adrfam": "ipv4", 00:19:37.508 "trsvcid": "$NVMF_PORT", 00:19:37.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.508 "hdgst": ${hdgst:-false}, 00:19:37.508 "ddgst": ${ddgst:-false} 00:19:37.508 }, 00:19:37.508 "method": "bdev_nvme_attach_controller" 00:19:37.508 } 00:19:37.508 EOF 00:19:37.508 )") 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.508 { 00:19:37.508 "params": { 00:19:37.508 "name": "Nvme$subsystem", 00:19:37.508 "trtype": "$TEST_TRANSPORT", 00:19:37.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.508 "adrfam": "ipv4", 00:19:37.508 "trsvcid": "$NVMF_PORT", 00:19:37.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.508 "hdgst": ${hdgst:-false}, 00:19:37.508 "ddgst": ${ddgst:-false} 00:19:37.508 }, 00:19:37.508 "method": "bdev_nvme_attach_controller" 00:19:37.508 } 00:19:37.508 EOF 00:19:37.508 )") 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.508 { 00:19:37.508 "params": { 00:19:37.508 "name": "Nvme$subsystem", 00:19:37.508 "trtype": "$TEST_TRANSPORT", 00:19:37.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.508 "adrfam": "ipv4", 00:19:37.508 "trsvcid": "$NVMF_PORT", 00:19:37.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.508 "hdgst": ${hdgst:-false}, 00:19:37.508 "ddgst": ${ddgst:-false} 00:19:37.508 }, 00:19:37.508 "method": "bdev_nvme_attach_controller" 00:19:37.508 } 00:19:37.508 EOF 00:19:37.508 )") 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.508 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
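The ten near-identical heredoc blocks above are gen_nvmf_target_json building one bdev_nvme_attach_controller entry per subsystem (cnode1 through cnode10), joining them with IFS="," and running the result through jq before handing it to bdev_svc via --json. A trimmed sketch of the same idea with two controllers instead of ten; the outer document the real helper wraps these entries in is not visible in this excerpt, so a bare JSON array stands in for it:

config=()
for i in 1 2; do
  block=$(
    cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )
  config+=("$block")
done

# Join the entries with commas and let jq validate and pretty-print the result.
(
  IFS=,
  printf '[%s]\n' "${config[*]}"
) | jq .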
00:19:37.509 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:37.509 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:37.509 "params": { 00:19:37.509 "name": "Nvme1", 00:19:37.509 "trtype": "tcp", 00:19:37.509 "traddr": "10.0.0.2", 00:19:37.509 "adrfam": "ipv4", 00:19:37.509 "trsvcid": "4420", 00:19:37.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:37.509 "hdgst": false, 00:19:37.509 "ddgst": false 00:19:37.509 }, 00:19:37.509 "method": "bdev_nvme_attach_controller" 00:19:37.509 },{ 00:19:37.509 "params": { 00:19:37.509 "name": "Nvme2", 00:19:37.509 "trtype": "tcp", 00:19:37.509 "traddr": "10.0.0.2", 00:19:37.509 "adrfam": "ipv4", 00:19:37.509 "trsvcid": "4420", 00:19:37.509 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:37.509 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:37.509 "hdgst": false, 00:19:37.509 "ddgst": false 00:19:37.509 }, 00:19:37.509 "method": "bdev_nvme_attach_controller" 00:19:37.509 },{ 00:19:37.509 "params": { 00:19:37.509 "name": "Nvme3", 00:19:37.509 "trtype": "tcp", 00:19:37.509 "traddr": "10.0.0.2", 00:19:37.509 "adrfam": "ipv4", 00:19:37.509 "trsvcid": "4420", 00:19:37.509 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:37.509 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:37.509 "hdgst": false, 00:19:37.509 "ddgst": false 00:19:37.509 }, 00:19:37.509 "method": "bdev_nvme_attach_controller" 00:19:37.509 },{ 00:19:37.509 "params": { 00:19:37.509 "name": "Nvme4", 00:19:37.509 "trtype": "tcp", 00:19:37.509 "traddr": "10.0.0.2", 00:19:37.509 "adrfam": "ipv4", 00:19:37.509 "trsvcid": "4420", 00:19:37.509 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:37.509 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:37.509 "hdgst": false, 00:19:37.509 "ddgst": false 00:19:37.509 }, 00:19:37.509 "method": "bdev_nvme_attach_controller" 00:19:37.509 },{ 00:19:37.509 "params": { 00:19:37.509 "name": "Nvme5", 00:19:37.509 "trtype": "tcp", 00:19:37.509 "traddr": "10.0.0.2", 00:19:37.509 "adrfam": "ipv4", 00:19:37.509 "trsvcid": "4420", 00:19:37.509 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:37.509 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:37.509 "hdgst": false, 00:19:37.509 "ddgst": false 00:19:37.509 }, 00:19:37.509 "method": "bdev_nvme_attach_controller" 00:19:37.509 },{ 00:19:37.509 "params": { 00:19:37.509 "name": "Nvme6", 00:19:37.509 "trtype": "tcp", 00:19:37.509 "traddr": "10.0.0.2", 00:19:37.509 "adrfam": "ipv4", 00:19:37.509 "trsvcid": "4420", 00:19:37.509 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:37.509 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:37.509 "hdgst": false, 00:19:37.509 "ddgst": false 00:19:37.509 }, 00:19:37.509 "method": "bdev_nvme_attach_controller" 00:19:37.509 },{ 00:19:37.509 "params": { 00:19:37.509 "name": "Nvme7", 00:19:37.509 "trtype": "tcp", 00:19:37.509 "traddr": "10.0.0.2", 00:19:37.509 "adrfam": "ipv4", 00:19:37.509 "trsvcid": "4420", 00:19:37.509 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:37.509 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:37.509 "hdgst": false, 00:19:37.509 "ddgst": false 00:19:37.509 }, 00:19:37.509 "method": "bdev_nvme_attach_controller" 00:19:37.509 },{ 00:19:37.509 "params": { 00:19:37.509 "name": "Nvme8", 00:19:37.509 "trtype": "tcp", 00:19:37.509 "traddr": "10.0.0.2", 00:19:37.509 "adrfam": "ipv4", 00:19:37.509 "trsvcid": "4420", 00:19:37.509 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:37.509 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:37.509 "hdgst": false, 
00:19:37.509 "ddgst": false 00:19:37.509 }, 00:19:37.509 "method": "bdev_nvme_attach_controller" 00:19:37.509 },{ 00:19:37.509 "params": { 00:19:37.509 "name": "Nvme9", 00:19:37.509 "trtype": "tcp", 00:19:37.509 "traddr": "10.0.0.2", 00:19:37.509 "adrfam": "ipv4", 00:19:37.509 "trsvcid": "4420", 00:19:37.509 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:37.509 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:37.509 "hdgst": false, 00:19:37.509 "ddgst": false 00:19:37.509 }, 00:19:37.509 "method": "bdev_nvme_attach_controller" 00:19:37.509 },{ 00:19:37.509 "params": { 00:19:37.509 "name": "Nvme10", 00:19:37.509 "trtype": "tcp", 00:19:37.509 "traddr": "10.0.0.2", 00:19:37.509 "adrfam": "ipv4", 00:19:37.509 "trsvcid": "4420", 00:19:37.509 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:37.509 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:37.509 "hdgst": false, 00:19:37.509 "ddgst": false 00:19:37.509 }, 00:19:37.509 "method": "bdev_nvme_attach_controller" 00:19:37.509 }' 00:19:37.509 [2024-05-16 20:18:24.552688] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:19:37.509 [2024-05-16 20:18:24.552758] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:37.509 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.509 [2024-05-16 20:18:24.615272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.768 [2024-05-16 20:18:24.726675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.141 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:39.141 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:19:39.141 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:39.141 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.141 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:39.141 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.141 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 257489 00:19:39.141 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:39.141 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:40.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 257489 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:40.075 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 257312 00:19:40.075 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:40.075 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:40.075 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:40.075 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- 
# local subsystem config 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.076 { 00:19:40.076 "params": { 00:19:40.076 "name": "Nvme$subsystem", 00:19:40.076 "trtype": "$TEST_TRANSPORT", 00:19:40.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.076 "adrfam": "ipv4", 00:19:40.076 "trsvcid": "$NVMF_PORT", 00:19:40.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.076 "hdgst": ${hdgst:-false}, 00:19:40.076 "ddgst": ${ddgst:-false} 00:19:40.076 }, 00:19:40.076 "method": "bdev_nvme_attach_controller" 00:19:40.076 } 00:19:40.076 EOF 00:19:40.076 )") 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.076 { 00:19:40.076 "params": { 00:19:40.076 "name": "Nvme$subsystem", 00:19:40.076 "trtype": "$TEST_TRANSPORT", 00:19:40.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.076 "adrfam": "ipv4", 00:19:40.076 "trsvcid": "$NVMF_PORT", 00:19:40.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.076 "hdgst": ${hdgst:-false}, 00:19:40.076 "ddgst": ${ddgst:-false} 00:19:40.076 }, 00:19:40.076 "method": "bdev_nvme_attach_controller" 00:19:40.076 } 00:19:40.076 EOF 00:19:40.076 )") 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.076 { 00:19:40.076 "params": { 00:19:40.076 "name": "Nvme$subsystem", 00:19:40.076 "trtype": "$TEST_TRANSPORT", 00:19:40.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.076 "adrfam": "ipv4", 00:19:40.076 "trsvcid": "$NVMF_PORT", 00:19:40.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.076 "hdgst": ${hdgst:-false}, 00:19:40.076 "ddgst": ${ddgst:-false} 00:19:40.076 }, 00:19:40.076 "method": "bdev_nvme_attach_controller" 00:19:40.076 } 00:19:40.076 EOF 00:19:40.076 )") 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.076 { 00:19:40.076 "params": { 00:19:40.076 "name": "Nvme$subsystem", 00:19:40.076 "trtype": "$TEST_TRANSPORT", 00:19:40.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.076 "adrfam": "ipv4", 00:19:40.076 "trsvcid": "$NVMF_PORT", 00:19:40.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.076 "hdgst": ${hdgst:-false}, 00:19:40.076 "ddgst": ${ddgst:-false} 00:19:40.076 }, 00:19:40.076 "method": "bdev_nvme_attach_controller" 00:19:40.076 } 00:19:40.076 EOF 00:19:40.076 )") 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.076 { 00:19:40.076 "params": { 00:19:40.076 "name": "Nvme$subsystem", 00:19:40.076 "trtype": "$TEST_TRANSPORT", 00:19:40.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.076 "adrfam": "ipv4", 00:19:40.076 "trsvcid": "$NVMF_PORT", 00:19:40.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.076 "hdgst": ${hdgst:-false}, 00:19:40.076 "ddgst": ${ddgst:-false} 00:19:40.076 }, 00:19:40.076 "method": "bdev_nvme_attach_controller" 00:19:40.076 } 00:19:40.076 EOF 00:19:40.076 )") 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.076 { 00:19:40.076 "params": { 00:19:40.076 "name": "Nvme$subsystem", 00:19:40.076 "trtype": "$TEST_TRANSPORT", 00:19:40.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.076 "adrfam": "ipv4", 00:19:40.076 "trsvcid": "$NVMF_PORT", 00:19:40.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.076 "hdgst": ${hdgst:-false}, 00:19:40.076 "ddgst": ${ddgst:-false} 00:19:40.076 }, 00:19:40.076 "method": "bdev_nvme_attach_controller" 00:19:40.076 } 00:19:40.076 EOF 00:19:40.076 )") 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.076 { 00:19:40.076 "params": { 00:19:40.076 "name": "Nvme$subsystem", 00:19:40.076 "trtype": "$TEST_TRANSPORT", 00:19:40.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.076 "adrfam": "ipv4", 00:19:40.076 "trsvcid": "$NVMF_PORT", 00:19:40.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.076 "hdgst": ${hdgst:-false}, 00:19:40.076 "ddgst": ${ddgst:-false} 00:19:40.076 }, 00:19:40.076 "method": "bdev_nvme_attach_controller" 00:19:40.076 } 00:19:40.076 EOF 00:19:40.076 )") 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.076 { 00:19:40.076 "params": { 00:19:40.076 "name": "Nvme$subsystem", 00:19:40.076 "trtype": "$TEST_TRANSPORT", 00:19:40.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.076 "adrfam": "ipv4", 00:19:40.076 "trsvcid": "$NVMF_PORT", 00:19:40.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.076 "hdgst": ${hdgst:-false}, 00:19:40.076 "ddgst": ${ddgst:-false} 00:19:40.076 }, 00:19:40.076 "method": "bdev_nvme_attach_controller" 00:19:40.076 } 00:19:40.076 EOF 00:19:40.076 )") 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.076 20:18:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.076 { 00:19:40.076 "params": { 00:19:40.076 "name": "Nvme$subsystem", 00:19:40.076 "trtype": "$TEST_TRANSPORT", 00:19:40.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.076 "adrfam": "ipv4", 00:19:40.076 "trsvcid": "$NVMF_PORT", 00:19:40.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.076 "hdgst": ${hdgst:-false}, 00:19:40.076 "ddgst": ${ddgst:-false} 00:19:40.076 }, 00:19:40.076 "method": "bdev_nvme_attach_controller" 00:19:40.076 } 00:19:40.076 EOF 00:19:40.076 )") 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.076 { 00:19:40.076 "params": { 00:19:40.076 "name": "Nvme$subsystem", 00:19:40.076 "trtype": "$TEST_TRANSPORT", 00:19:40.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.076 "adrfam": "ipv4", 00:19:40.076 "trsvcid": "$NVMF_PORT", 00:19:40.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.076 "hdgst": ${hdgst:-false}, 00:19:40.076 "ddgst": ${ddgst:-false} 00:19:40.076 }, 00:19:40.076 "method": "bdev_nvme_attach_controller" 00:19:40.076 } 00:19:40.076 EOF 00:19:40.076 )") 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
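Condensed, the tc1 sequence this part of the trace belongs to is: SIGKILL the bdev_svc app that attached the ten controllers, confirm the nvmf target itself is still alive, then re-attach with bdevperf using a freshly generated config (the loop above). A hedged sketch with the literal values from this run; perfpid, nvmfpid and a sourced gen_nvmf_target_json are stand-ins for what shutdown.sh and nvmf/common.sh hold internally:

kill -9 "$perfpid"          # the bdev_svc app (257489 in this run)
rm -f /var/run/spdk_bdev1
sleep 1
kill -0 "$nvmfpid"          # nvmf_tgt (257312) must still be running
./build/examples/bdevperf \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1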
00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:40.076 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:40.076 "params": { 00:19:40.076 "name": "Nvme1", 00:19:40.076 "trtype": "tcp", 00:19:40.076 "traddr": "10.0.0.2", 00:19:40.076 "adrfam": "ipv4", 00:19:40.076 "trsvcid": "4420", 00:19:40.076 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.076 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.076 "hdgst": false, 00:19:40.076 "ddgst": false 00:19:40.076 }, 00:19:40.076 "method": "bdev_nvme_attach_controller" 00:19:40.076 },{ 00:19:40.076 "params": { 00:19:40.076 "name": "Nvme2", 00:19:40.076 "trtype": "tcp", 00:19:40.076 "traddr": "10.0.0.2", 00:19:40.076 "adrfam": "ipv4", 00:19:40.076 "trsvcid": "4420", 00:19:40.076 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:40.076 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:40.076 "hdgst": false, 00:19:40.076 "ddgst": false 00:19:40.076 }, 00:19:40.076 "method": "bdev_nvme_attach_controller" 00:19:40.076 },{ 00:19:40.076 "params": { 00:19:40.076 "name": "Nvme3", 00:19:40.076 "trtype": "tcp", 00:19:40.076 "traddr": "10.0.0.2", 00:19:40.076 "adrfam": "ipv4", 00:19:40.076 "trsvcid": "4420", 00:19:40.076 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:40.076 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:40.076 "hdgst": false, 00:19:40.076 "ddgst": false 00:19:40.076 }, 00:19:40.076 "method": "bdev_nvme_attach_controller" 00:19:40.076 },{ 00:19:40.076 "params": { 00:19:40.076 "name": "Nvme4", 00:19:40.077 "trtype": "tcp", 00:19:40.077 "traddr": "10.0.0.2", 00:19:40.077 "adrfam": "ipv4", 00:19:40.077 "trsvcid": "4420", 00:19:40.077 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:40.077 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:40.077 "hdgst": false, 00:19:40.077 "ddgst": false 00:19:40.077 }, 00:19:40.077 "method": "bdev_nvme_attach_controller" 00:19:40.077 },{ 00:19:40.077 "params": { 00:19:40.077 "name": "Nvme5", 00:19:40.077 "trtype": "tcp", 00:19:40.077 "traddr": "10.0.0.2", 00:19:40.077 "adrfam": "ipv4", 00:19:40.077 "trsvcid": "4420", 00:19:40.077 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:40.077 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:40.077 "hdgst": false, 00:19:40.077 "ddgst": false 00:19:40.077 }, 00:19:40.077 "method": "bdev_nvme_attach_controller" 00:19:40.077 },{ 00:19:40.077 "params": { 00:19:40.077 "name": "Nvme6", 00:19:40.077 "trtype": "tcp", 00:19:40.077 "traddr": "10.0.0.2", 00:19:40.077 "adrfam": "ipv4", 00:19:40.077 "trsvcid": "4420", 00:19:40.077 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:40.077 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:40.077 "hdgst": false, 00:19:40.077 "ddgst": false 00:19:40.077 }, 00:19:40.077 "method": "bdev_nvme_attach_controller" 00:19:40.077 },{ 00:19:40.077 "params": { 00:19:40.077 "name": "Nvme7", 00:19:40.077 "trtype": "tcp", 00:19:40.077 "traddr": "10.0.0.2", 00:19:40.077 "adrfam": "ipv4", 00:19:40.077 "trsvcid": "4420", 00:19:40.077 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:40.077 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:40.077 "hdgst": false, 00:19:40.077 "ddgst": false 00:19:40.077 }, 00:19:40.077 "method": "bdev_nvme_attach_controller" 00:19:40.077 },{ 00:19:40.077 "params": { 00:19:40.077 "name": "Nvme8", 00:19:40.077 "trtype": "tcp", 00:19:40.077 "traddr": "10.0.0.2", 00:19:40.077 "adrfam": "ipv4", 00:19:40.077 "trsvcid": "4420", 00:19:40.077 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:40.077 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:40.077 "hdgst": false, 
00:19:40.077 "ddgst": false 00:19:40.077 }, 00:19:40.077 "method": "bdev_nvme_attach_controller" 00:19:40.077 },{ 00:19:40.077 "params": { 00:19:40.077 "name": "Nvme9", 00:19:40.077 "trtype": "tcp", 00:19:40.077 "traddr": "10.0.0.2", 00:19:40.077 "adrfam": "ipv4", 00:19:40.077 "trsvcid": "4420", 00:19:40.077 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:40.077 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:40.077 "hdgst": false, 00:19:40.077 "ddgst": false 00:19:40.077 }, 00:19:40.077 "method": "bdev_nvme_attach_controller" 00:19:40.077 },{ 00:19:40.077 "params": { 00:19:40.077 "name": "Nvme10", 00:19:40.077 "trtype": "tcp", 00:19:40.077 "traddr": "10.0.0.2", 00:19:40.077 "adrfam": "ipv4", 00:19:40.077 "trsvcid": "4420", 00:19:40.077 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:40.077 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:40.077 "hdgst": false, 00:19:40.077 "ddgst": false 00:19:40.077 }, 00:19:40.077 "method": "bdev_nvme_attach_controller" 00:19:40.077 }' 00:19:40.077 [2024-05-16 20:18:27.189739] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:19:40.077 [2024-05-16 20:18:27.189813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257784 ] 00:19:40.336 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.336 [2024-05-16 20:18:27.253657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.336 [2024-05-16 20:18:27.364555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.711 Running I/O for 1 seconds... 00:19:42.646 00:19:42.646 Latency(us) 00:19:42.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.646 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:42.646 Verification LBA range: start 0x0 length 0x400 00:19:42.646 Nvme1n1 : 1.15 221.68 13.86 0.00 0.00 283822.08 7378.87 260978.92 00:19:42.646 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:42.646 Verification LBA range: start 0x0 length 0x400 00:19:42.646 Nvme2n1 : 1.12 243.04 15.19 0.00 0.00 253657.78 8058.50 259425.47 00:19:42.646 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:42.646 Verification LBA range: start 0x0 length 0x400 00:19:42.646 Nvme3n1 : 1.05 243.41 15.21 0.00 0.00 250991.69 18738.44 257872.02 00:19:42.646 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:42.646 Verification LBA range: start 0x0 length 0x400 00:19:42.646 Nvme4n1 : 1.11 231.47 14.47 0.00 0.00 259947.90 20388.98 262532.36 00:19:42.646 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:42.646 Verification LBA range: start 0x0 length 0x400 00:19:42.646 Nvme5n1 : 1.17 219.19 13.70 0.00 0.00 270863.17 31263.10 276513.37 00:19:42.646 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:42.646 Verification LBA range: start 0x0 length 0x400 00:19:42.646 Nvme6n1 : 1.16 220.23 13.76 0.00 0.00 264824.04 21651.15 292047.83 00:19:42.646 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:42.646 Verification LBA range: start 0x0 length 0x400 00:19:42.646 Nvme7n1 : 1.12 232.69 14.54 0.00 0.00 244579.73 1201.49 259425.47 00:19:42.646 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:42.646 Verification LBA range: start 0x0 length 0x400 
00:19:42.646 Nvme8n1 : 1.15 222.62 13.91 0.00 0.00 253044.81 39030.33 236123.78 00:19:42.646 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:42.646 Verification LBA range: start 0x0 length 0x400 00:19:42.646 Nvme9n1 : 1.17 277.51 17.34 0.00 0.00 199627.10 5534.15 217482.43 00:19:42.646 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:42.646 Verification LBA range: start 0x0 length 0x400 00:19:42.646 Nvme10n1 : 1.18 271.54 16.97 0.00 0.00 200982.45 10582.85 254765.13 00:19:42.646 =================================================================================================================== 00:19:42.646 Total : 2383.39 148.96 0.00 0.00 245909.78 1201.49 292047.83 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:43.213 rmmod nvme_tcp 00:19:43.213 rmmod nvme_fabrics 00:19:43.213 rmmod nvme_keyring 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 257312 ']' 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 257312 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 257312 ']' 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 257312 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 257312 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:43.213 20:18:30 
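A quick consistency check on the bdevperf table above: with the 64 KiB I/O size (-o 65536), MiB/s should equal IOPS scaled by the I/O size. For Nvme1n1, 221.68 IOPS x 65536 B comes to roughly 13.86 MiB/s, matching the reported column. The same check as a one-liner (plain awk, not part of the test):

awk 'BEGIN { iops = 221.68; printf "%.2f MiB/s\n", iops * 65536 / (1024 * 1024) }'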
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 257312' 00:19:43.213 killing process with pid 257312 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 257312 00:19:43.213 [2024-05-16 20:18:30.144033] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:43.213 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 257312 00:19:43.781 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:43.781 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:43.781 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:43.781 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:43.781 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:43.781 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.781 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.781 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:45.684 00:19:45.684 real 0m11.204s 00:19:45.684 user 0m31.492s 00:19:45.684 sys 0m3.036s 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.684 ************************************ 00:19:45.684 END TEST nvmf_shutdown_tc1 00:19:45.684 ************************************ 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:45.684 ************************************ 00:19:45.684 START TEST nvmf_shutdown_tc2 00:19:45.684 ************************************ 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
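The tc1 teardown traced above unwinds the setup: the state files are removed, nvme-tcp and nvme-fabrics are unloaded inside a bounded retry loop, the target PID is killed and waited on, and the namespace and addresses are flushed. A loose sketch of the module-unload part; the break-on-success and sleep are assumptions about how the @121/@122 loop behaves on failure, not a copy of nvmf/common.sh:

set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break   # can fail while references still linger
    sleep 1
done
modprobe -v -r nvme-fabrics
set -e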
nvmf/common.sh@412 -- # remove_spdk_ns 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.684 20:18:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:45.684 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:45.684 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.684 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:45.685 Found net devices under 0000:09:00.0: cvl_0_0 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:45.685 Found net devices under 0000:09:00.1: cvl_0_1 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
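The @340 through @401 entries above work out which host NICs back the test: the PCI list is filtered down to E810 parts (device ID 0x159b) and the bound netdev names are read from sysfs, yielding cvl_0_0 and cvl_0_1. Roughly the same lookup done by hand, using only lspci and the sysfs path the trace shows, not the common.sh code itself:

for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    echo "$pci -> $(ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null)"
done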
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:45.685 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:45.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:45.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:19:45.943 00:19:45.943 --- 10.0.0.2 ping statistics --- 00:19:45.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.943 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:45.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:45.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:19:45.943 00:19:45.943 --- 10.0.0.1 ping statistics --- 00:19:45.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.943 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=258550 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 258550 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 258550 ']' 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:45.943 20:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:45.943 [2024-05-16 20:18:32.970259] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
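nvmf_tcp_init above splits the two ports across a network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk and gets the target address, cvl_0_1 keeps the initiator address in the default namespace, port 4420 is opened, and reachability is ping-checked both ways before the target starts inside the namespace. The same sequence condensed from the trace (assumes root, the interface names and addresses this run discovered, and being run from the SPDK tree):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# The target itself then runs inside the namespace (cf. the @480/@481 entries):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &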
00:19:45.943 [2024-05-16 20:18:32.970338] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.943 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.943 [2024-05-16 20:18:33.041736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:46.201 [2024-05-16 20:18:33.161402] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.201 [2024-05-16 20:18:33.161464] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.201 [2024-05-16 20:18:33.161481] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.201 [2024-05-16 20:18:33.161494] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.201 [2024-05-16 20:18:33.161506] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:46.201 [2024-05-16 20:18:33.161611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.201 [2024-05-16 20:18:33.161708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:46.201 [2024-05-16 20:18:33.161776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.201 [2024-05-16 20:18:33.161773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:46.201 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:46.201 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:19:46.201 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:46.201 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:46.202 [2024-05-16 20:18:33.315659] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.202 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.460 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.460 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.460 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.460 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.460 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:46.460 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.460 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:46.460 Malloc1 00:19:46.460 [2024-05-16 20:18:33.396028] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:46.460 [2024-05-16 20:18:33.396359] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.460 Malloc2 00:19:46.460 Malloc3 00:19:46.460 Malloc4 00:19:46.460 Malloc5 00:19:46.717 Malloc6 00:19:46.717 Malloc7 00:19:46.717 Malloc8 00:19:46.717 Malloc9 00:19:46.717 Malloc10 00:19:46.717 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.717 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:46.717 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:46.717 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:46.717 20:18:33 
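The create_subsystems loop above batches its RPCs through rpcs.txt, so the log only shows the resulting Malloc1 through Malloc10 bdevs and the listener notice. Spelled out as individual rpc.py calls, each iteration amounts to roughly the following; the bdev size, block size and serial numbers are placeholders rather than the values shutdown.sh uses, and the tcp transport is assumed to already exist (created at @20 above):

RPC=./scripts/rpc.py    # SPDK tree, default socket /var/tmp/spdk.sock
for i in $(seq 1 10); do
    $RPC bdev_malloc_create -b "Malloc$i" 64 512
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done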
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=258730 00:19:46.717 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 258730 /var/tmp/bdevperf.sock 00:19:46.718 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 258730 ']' 00:19:46.718 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.718 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:46.718 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:46.718 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:46.718 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:46.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:46.718 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:46.718 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:46.718 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:46.718 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:46.718 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.718 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.718 { 00:19:46.718 "params": { 00:19:46.718 "name": "Nvme$subsystem", 00:19:46.718 "trtype": "$TEST_TRANSPORT", 00:19:46.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.718 "adrfam": "ipv4", 00:19:46.718 "trsvcid": "$NVMF_PORT", 00:19:46.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.718 "hdgst": ${hdgst:-false}, 00:19:46.718 "ddgst": ${ddgst:-false} 00:19:46.718 }, 00:19:46.718 "method": "bdev_nvme_attach_controller" 00:19:46.718 } 00:19:46.718 EOF 00:19:46.718 )") 00:19:46.718 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:46.718 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.718 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.718 { 00:19:46.718 "params": { 00:19:46.718 "name": "Nvme$subsystem", 00:19:46.718 "trtype": "$TEST_TRANSPORT", 00:19:46.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.718 "adrfam": "ipv4", 00:19:46.718 "trsvcid": "$NVMF_PORT", 00:19:46.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.718 "hdgst": ${hdgst:-false}, 00:19:46.718 "ddgst": ${ddgst:-false} 00:19:46.718 }, 00:19:46.718 "method": "bdev_nvme_attach_controller" 00:19:46.718 } 00:19:46.718 EOF 00:19:46.718 )") 00:19:46.718 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:46.718 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.718 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.718 { 00:19:46.718 "params": { 00:19:46.718 "name": "Nvme$subsystem", 00:19:46.718 "trtype": "$TEST_TRANSPORT", 00:19:46.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.718 "adrfam": "ipv4", 00:19:46.718 "trsvcid": "$NVMF_PORT", 00:19:46.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.718 "hdgst": ${hdgst:-false}, 00:19:46.718 "ddgst": ${ddgst:-false} 00:19:46.718 }, 00:19:46.718 "method": "bdev_nvme_attach_controller" 00:19:46.718 } 00:19:46.718 EOF 00:19:46.718 )") 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.977 { 00:19:46.977 "params": { 00:19:46.977 "name": "Nvme$subsystem", 00:19:46.977 "trtype": "$TEST_TRANSPORT", 00:19:46.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.977 "adrfam": "ipv4", 00:19:46.977 "trsvcid": "$NVMF_PORT", 00:19:46.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.977 "hdgst": ${hdgst:-false}, 00:19:46.977 "ddgst": ${ddgst:-false} 00:19:46.977 }, 00:19:46.977 "method": "bdev_nvme_attach_controller" 00:19:46.977 } 00:19:46.977 EOF 00:19:46.977 )") 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.977 { 00:19:46.977 "params": { 00:19:46.977 "name": "Nvme$subsystem", 00:19:46.977 "trtype": "$TEST_TRANSPORT", 00:19:46.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.977 "adrfam": "ipv4", 00:19:46.977 "trsvcid": "$NVMF_PORT", 00:19:46.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.977 "hdgst": ${hdgst:-false}, 00:19:46.977 "ddgst": ${ddgst:-false} 00:19:46.977 }, 00:19:46.977 "method": "bdev_nvme_attach_controller" 00:19:46.977 } 00:19:46.977 EOF 00:19:46.977 )") 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.977 { 00:19:46.977 "params": { 00:19:46.977 "name": "Nvme$subsystem", 00:19:46.977 "trtype": "$TEST_TRANSPORT", 00:19:46.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.977 "adrfam": "ipv4", 00:19:46.977 "trsvcid": "$NVMF_PORT", 00:19:46.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.977 "hdgst": ${hdgst:-false}, 00:19:46.977 "ddgst": ${ddgst:-false} 00:19:46.977 }, 00:19:46.977 "method": "bdev_nvme_attach_controller" 00:19:46.977 } 00:19:46.977 EOF 00:19:46.977 )") 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.977 { 00:19:46.977 "params": { 00:19:46.977 "name": "Nvme$subsystem", 00:19:46.977 "trtype": "$TEST_TRANSPORT", 00:19:46.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.977 "adrfam": "ipv4", 00:19:46.977 "trsvcid": "$NVMF_PORT", 00:19:46.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.977 "hdgst": ${hdgst:-false}, 00:19:46.977 "ddgst": ${ddgst:-false} 00:19:46.977 }, 00:19:46.977 "method": "bdev_nvme_attach_controller" 00:19:46.977 } 00:19:46.977 EOF 00:19:46.977 )") 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.977 { 00:19:46.977 "params": { 00:19:46.977 "name": "Nvme$subsystem", 00:19:46.977 "trtype": "$TEST_TRANSPORT", 00:19:46.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.977 "adrfam": "ipv4", 00:19:46.977 "trsvcid": "$NVMF_PORT", 00:19:46.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.977 "hdgst": ${hdgst:-false}, 00:19:46.977 "ddgst": ${ddgst:-false} 00:19:46.977 }, 00:19:46.977 "method": "bdev_nvme_attach_controller" 00:19:46.977 } 00:19:46.977 EOF 00:19:46.977 )") 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.977 { 00:19:46.977 "params": { 00:19:46.977 "name": "Nvme$subsystem", 00:19:46.977 "trtype": "$TEST_TRANSPORT", 00:19:46.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.977 "adrfam": "ipv4", 00:19:46.977 "trsvcid": "$NVMF_PORT", 00:19:46.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.977 "hdgst": ${hdgst:-false}, 00:19:46.977 "ddgst": ${ddgst:-false} 00:19:46.977 }, 00:19:46.977 "method": "bdev_nvme_attach_controller" 00:19:46.977 } 00:19:46.977 EOF 00:19:46.977 )") 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.977 { 00:19:46.977 "params": { 00:19:46.977 "name": "Nvme$subsystem", 00:19:46.977 "trtype": "$TEST_TRANSPORT", 00:19:46.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.977 "adrfam": "ipv4", 00:19:46.977 "trsvcid": "$NVMF_PORT", 00:19:46.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.977 "hdgst": ${hdgst:-false}, 00:19:46.977 "ddgst": ${ddgst:-false} 00:19:46.977 }, 00:19:46.977 "method": "bdev_nvme_attach_controller" 00:19:46.977 } 00:19:46.977 EOF 00:19:46.977 )") 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:19:46.977 20:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:46.977 "params": { 00:19:46.977 "name": "Nvme1", 00:19:46.977 "trtype": "tcp", 00:19:46.977 "traddr": "10.0.0.2", 00:19:46.977 "adrfam": "ipv4", 00:19:46.977 "trsvcid": "4420", 00:19:46.977 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.977 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.977 "hdgst": false, 00:19:46.977 "ddgst": false 00:19:46.977 }, 00:19:46.977 "method": "bdev_nvme_attach_controller" 00:19:46.977 },{ 00:19:46.977 "params": { 00:19:46.977 "name": "Nvme2", 00:19:46.977 "trtype": "tcp", 00:19:46.977 "traddr": "10.0.0.2", 00:19:46.977 "adrfam": "ipv4", 00:19:46.977 "trsvcid": "4420", 00:19:46.977 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:46.977 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:46.977 "hdgst": false, 00:19:46.977 "ddgst": false 00:19:46.977 }, 00:19:46.977 "method": "bdev_nvme_attach_controller" 00:19:46.977 },{ 00:19:46.977 "params": { 00:19:46.977 "name": "Nvme3", 00:19:46.977 "trtype": "tcp", 00:19:46.977 "traddr": "10.0.0.2", 00:19:46.977 "adrfam": "ipv4", 00:19:46.977 "trsvcid": "4420", 00:19:46.977 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:46.977 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:46.978 "hdgst": false, 00:19:46.978 "ddgst": false 00:19:46.978 }, 00:19:46.978 "method": "bdev_nvme_attach_controller" 00:19:46.978 },{ 00:19:46.978 "params": { 00:19:46.978 "name": "Nvme4", 00:19:46.978 "trtype": "tcp", 00:19:46.978 "traddr": "10.0.0.2", 00:19:46.978 "adrfam": "ipv4", 00:19:46.978 "trsvcid": "4420", 00:19:46.978 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:46.978 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:46.978 "hdgst": false, 00:19:46.978 "ddgst": false 00:19:46.978 }, 00:19:46.978 "method": "bdev_nvme_attach_controller" 00:19:46.978 },{ 00:19:46.978 "params": { 00:19:46.978 "name": "Nvme5", 00:19:46.978 "trtype": "tcp", 00:19:46.978 "traddr": "10.0.0.2", 00:19:46.978 "adrfam": "ipv4", 00:19:46.978 "trsvcid": "4420", 00:19:46.978 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:46.978 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:46.978 "hdgst": false, 00:19:46.978 "ddgst": false 00:19:46.978 }, 00:19:46.978 "method": "bdev_nvme_attach_controller" 00:19:46.978 },{ 00:19:46.978 "params": { 00:19:46.978 "name": "Nvme6", 00:19:46.978 "trtype": "tcp", 00:19:46.978 "traddr": "10.0.0.2", 00:19:46.978 "adrfam": "ipv4", 00:19:46.978 "trsvcid": "4420", 00:19:46.978 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:46.978 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:46.978 "hdgst": false, 00:19:46.978 "ddgst": false 00:19:46.978 }, 00:19:46.978 "method": "bdev_nvme_attach_controller" 00:19:46.978 },{ 00:19:46.978 "params": { 00:19:46.978 "name": "Nvme7", 00:19:46.978 "trtype": "tcp", 00:19:46.978 "traddr": "10.0.0.2", 00:19:46.978 "adrfam": "ipv4", 00:19:46.978 "trsvcid": "4420", 00:19:46.978 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:46.978 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:46.978 "hdgst": false, 00:19:46.978 "ddgst": false 00:19:46.978 }, 00:19:46.978 "method": "bdev_nvme_attach_controller" 00:19:46.978 },{ 00:19:46.978 "params": { 00:19:46.978 "name": "Nvme8", 00:19:46.978 "trtype": "tcp", 00:19:46.978 "traddr": "10.0.0.2", 00:19:46.978 "adrfam": "ipv4", 00:19:46.978 "trsvcid": "4420", 00:19:46.978 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:46.978 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:46.978 "hdgst": false, 
00:19:46.978 "ddgst": false 00:19:46.978 }, 00:19:46.978 "method": "bdev_nvme_attach_controller" 00:19:46.978 },{ 00:19:46.978 "params": { 00:19:46.978 "name": "Nvme9", 00:19:46.978 "trtype": "tcp", 00:19:46.978 "traddr": "10.0.0.2", 00:19:46.978 "adrfam": "ipv4", 00:19:46.978 "trsvcid": "4420", 00:19:46.978 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:46.978 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:46.978 "hdgst": false, 00:19:46.978 "ddgst": false 00:19:46.978 }, 00:19:46.978 "method": "bdev_nvme_attach_controller" 00:19:46.978 },{ 00:19:46.978 "params": { 00:19:46.978 "name": "Nvme10", 00:19:46.978 "trtype": "tcp", 00:19:46.978 "traddr": "10.0.0.2", 00:19:46.978 "adrfam": "ipv4", 00:19:46.978 "trsvcid": "4420", 00:19:46.978 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:46.978 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:46.978 "hdgst": false, 00:19:46.978 "ddgst": false 00:19:46.978 }, 00:19:46.978 "method": "bdev_nvme_attach_controller" 00:19:46.978 }' 00:19:46.978 [2024-05-16 20:18:33.896452] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:19:46.978 [2024-05-16 20:18:33.896525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258730 ] 00:19:46.978 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.978 [2024-05-16 20:18:33.960882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.978 [2024-05-16 20:18:34.071810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.880 Running I/O for 10 seconds... 00:19:48.880 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:48.880 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:19:48.880 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:48.881 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.881 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.881 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.881 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:48.881 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:48.881 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:48.881 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:48.881 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:48.881 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:48.881 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:48.881 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:48.881 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:48.881 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.881 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.881 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.881 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=18 00:19:48.881 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 18 -ge 100 ']' 00:19:48.881 20:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:49.138 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:49.138 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:49.138 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:49.139 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:49.139 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.139 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.139 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.139 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:49.139 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:49.139 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:19:49.139 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:19:49.139 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:19:49.139 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 258730 00:19:49.139 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 258730 ']' 00:19:49.139 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 258730 00:19:49.139 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:19:49.139 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:49.139 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 258730 00:19:49.139 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:49.139 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:49.139 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 258730' 00:19:49.139 killing process with pid 258730 00:19:49.139 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 258730 00:19:49.139 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 258730 00:19:49.397 Received shutdown signal, test time was about 0.665323 seconds 00:19:49.397 00:19:49.397 Latency(us) 00:19:49.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.397 Job: Nvme1n1 (Core Mask 
0x1, workload: verify, depth: 64, IO size: 65536) 00:19:49.397 Verification LBA range: start 0x0 length 0x400 00:19:49.397 Nvme1n1 : 0.66 291.98 18.25 0.00 0.00 214373.77 18738.44 259425.47 00:19:49.397 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:49.397 Verification LBA range: start 0x0 length 0x400 00:19:49.397 Nvme2n1 : 0.63 211.01 13.19 0.00 0.00 283940.06 6019.60 250104.79 00:19:49.397 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:49.397 Verification LBA range: start 0x0 length 0x400 00:19:49.397 Nvme3n1 : 0.66 288.94 18.06 0.00 0.00 205612.63 16408.27 254765.13 00:19:49.397 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:49.397 Verification LBA range: start 0x0 length 0x400 00:19:49.397 Nvme4n1 : 0.66 299.38 18.71 0.00 0.00 191504.32 2621.44 260978.92 00:19:49.397 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:49.397 Verification LBA range: start 0x0 length 0x400 00:19:49.397 Nvme5n1 : 0.65 197.60 12.35 0.00 0.00 281593.17 21651.15 278066.82 00:19:49.397 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:49.397 Verification LBA range: start 0x0 length 0x400 00:19:49.397 Nvme6n1 : 0.64 198.94 12.43 0.00 0.00 270764.56 24855.13 251658.24 00:19:49.397 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:49.397 Verification LBA range: start 0x0 length 0x400 00:19:49.397 Nvme7n1 : 0.64 200.98 12.56 0.00 0.00 258406.78 22136.60 225249.66 00:19:49.397 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:49.397 Verification LBA range: start 0x0 length 0x400 00:19:49.397 Nvme8n1 : 0.62 205.74 12.86 0.00 0.00 242376.25 23495.87 233016.89 00:19:49.397 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:49.397 Verification LBA range: start 0x0 length 0x400 00:19:49.397 Nvme9n1 : 0.63 203.42 12.71 0.00 0.00 236926.29 21942.42 219035.88 00:19:49.397 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:49.397 Verification LBA range: start 0x0 length 0x400 00:19:49.397 Nvme10n1 : 0.65 195.76 12.23 0.00 0.00 240379.07 23981.32 293601.28 00:19:49.397 =================================================================================================================== 00:19:49.397 Total : 2293.76 143.36 0.00 0.00 237502.61 2621.44 293601.28 00:19:49.656 20:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 258550 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:19:50.591 
20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:50.591 rmmod nvme_tcp 00:19:50.591 rmmod nvme_fabrics 00:19:50.591 rmmod nvme_keyring 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 258550 ']' 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 258550 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 258550 ']' 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 258550 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 258550 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 258550' 00:19:50.591 killing process with pid 258550 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 258550 00:19:50.591 [2024-05-16 20:18:37.659858] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:50.591 20:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 258550 00:19:51.158 20:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:51.158 20:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:51.158 20:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:51.158 20:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:51.158 20:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:51.158 20:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.158 20:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.159 20:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:53.693 00:19:53.693 real 0m7.487s 00:19:53.693 user 0m22.065s 00:19:53.693 sys 0m1.358s 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:53.693 ************************************ 00:19:53.693 END TEST nvmf_shutdown_tc2 00:19:53.693 ************************************ 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:53.693 ************************************ 00:19:53.693 START TEST nvmf_shutdown_tc3 00:19:53.693 ************************************ 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:53.693 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 
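nvmftestinit's device scan, traced below, matches the host's E810 ports by PCI ID and then resolves each PCI function to its kernel interface by globbing sysfs. The lookup reduces to a few lines; the PCI address used here is the one this run discovers (0000:09:00.0), swap in your own elsewhere:

# Resolve the net interface(s) sitting behind one PCI function, as nvmf/common.sh does.
pci=0000:09:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
if (( ${#pci_net_devs[@]} == 0 )) || [[ ! -e ${pci_net_devs[0]} ]]; then
  echo "no net devices under $pci" >&2
else
  pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the ifname, e.g. cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
fi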
00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:53.694 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:53.694 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:53.694 Found net devices under 0000:09:00.0: cvl_0_0 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:09:00.1: cvl_0_1' 00:19:53.694 Found net devices under 0000:09:00.1: cvl_0_1 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:53.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:53.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:19:53.694 00:19:53.694 --- 10.0.0.2 ping statistics --- 00:19:53.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.694 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:53.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:53.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:19:53.694 00:19:53.694 --- 10.0.0.1 ping statistics --- 00:19:53.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.694 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:53.694 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=259641 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 259641 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 259641 ']' 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
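nvmfappstart has just launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace (pid 259641), and waitforlisten now blocks until the app's RPC socket at /var/tmp/spdk.sock is usable. A simplified stand-in for that wait is sketched below; the real helper in autotest_common.sh does more validation than this, so treat it only as the shape of the loop:

# Simplified waitforlisten: poll until the SPDK app opens its RPC socket or exits.
waitfor_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do              # max_retries=100, as in the trace
        kill -0 "$pid" 2>/dev/null || return 1   # app exited before it could listen
        [[ -S $sock ]] && return 0               # UNIX domain socket is up
        sleep 0.5
    done
    return 1
}
# e.g.: waitfor_rpc_sock 259641 /var/tmp/spdk.sock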
00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:53.695 [2024-05-16 20:18:40.501824] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:19:53.695 [2024-05-16 20:18:40.501943] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.695 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.695 [2024-05-16 20:18:40.568713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:53.695 [2024-05-16 20:18:40.675665] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.695 [2024-05-16 20:18:40.675716] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.695 [2024-05-16 20:18:40.675744] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.695 [2024-05-16 20:18:40.675755] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.695 [2024-05-16 20:18:40.675764] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.695 [2024-05-16 20:18:40.675850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.695 [2024-05-16 20:18:40.675982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:53.695 [2024-05-16 20:18:40.676031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:53.695 [2024-05-16 20:18:40.676035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:53.695 [2024-05-16 20:18:40.817506] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:53.695 
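Once the target answers, the transport is created with the exact flags shown above (nvmf_create_transport -t tcp -o -u 8192) and the ten Malloc-backed subsystems that appear below are pushed in one batched rpcs.txt. For reference, a hedged sketch of the equivalent direct rpc.py calls for a single subsystem; the malloc size/block size and serial number are illustrative choices, not values read from this run:

rpc=./scripts/rpc.py                               # talks to /var/tmp/spdk.sock by default
$rpc nvmf_create_transport -t tcp -o -u 8192       # same flags as the trace above
$rpc bdev_malloc_create 64 512 -b Malloc1          # 64 MiB bdev, 512-byte blocks (illustrative)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420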
20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.695 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:53.954 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.954 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:53.954 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.954 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:53.954 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.954 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:53.954 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.954 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:53.954 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.954 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:53.954 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.954 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:53.954 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:53.954 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.954 20:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:53.954 Malloc1 00:19:53.954 [2024-05-16 20:18:40.891704] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:53.954 [2024-05-16 20:18:40.892023] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.954 Malloc2 00:19:53.954 Malloc3 00:19:53.954 Malloc4 00:19:53.954 Malloc5 00:19:54.213 Malloc6 00:19:54.213 Malloc7 00:19:54.213 Malloc8 00:19:54.213 Malloc9 00:19:54.213 Malloc10 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.213 20:18:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=259816 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 259816 /var/tmp/bdevperf.sock 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 259816 ']' 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.213 { 00:19:54.213 "params": { 00:19:54.213 "name": "Nvme$subsystem", 00:19:54.213 "trtype": "$TEST_TRANSPORT", 00:19:54.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.213 "adrfam": "ipv4", 00:19:54.213 "trsvcid": "$NVMF_PORT", 00:19:54.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.213 "hdgst": ${hdgst:-false}, 00:19:54.213 "ddgst": ${ddgst:-false} 00:19:54.213 }, 00:19:54.213 "method": "bdev_nvme_attach_controller" 00:19:54.213 } 00:19:54.213 EOF 00:19:54.213 )") 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.213 { 00:19:54.213 "params": { 00:19:54.213 "name": "Nvme$subsystem", 00:19:54.213 "trtype": "$TEST_TRANSPORT", 00:19:54.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.213 "adrfam": "ipv4", 00:19:54.213 "trsvcid": "$NVMF_PORT", 00:19:54.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:19:54.213 "hdgst": ${hdgst:-false}, 00:19:54.213 "ddgst": ${ddgst:-false} 00:19:54.213 }, 00:19:54.213 "method": "bdev_nvme_attach_controller" 00:19:54.213 } 00:19:54.213 EOF 00:19:54.213 )") 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.213 { 00:19:54.213 "params": { 00:19:54.213 "name": "Nvme$subsystem", 00:19:54.213 "trtype": "$TEST_TRANSPORT", 00:19:54.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.213 "adrfam": "ipv4", 00:19:54.213 "trsvcid": "$NVMF_PORT", 00:19:54.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.213 "hdgst": ${hdgst:-false}, 00:19:54.213 "ddgst": ${ddgst:-false} 00:19:54.213 }, 00:19:54.213 "method": "bdev_nvme_attach_controller" 00:19:54.213 } 00:19:54.213 EOF 00:19:54.213 )") 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.213 { 00:19:54.213 "params": { 00:19:54.213 "name": "Nvme$subsystem", 00:19:54.213 "trtype": "$TEST_TRANSPORT", 00:19:54.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.213 "adrfam": "ipv4", 00:19:54.213 "trsvcid": "$NVMF_PORT", 00:19:54.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.213 "hdgst": ${hdgst:-false}, 00:19:54.213 "ddgst": ${ddgst:-false} 00:19:54.213 }, 00:19:54.213 "method": "bdev_nvme_attach_controller" 00:19:54.213 } 00:19:54.213 EOF 00:19:54.213 )") 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.213 { 00:19:54.213 "params": { 00:19:54.213 "name": "Nvme$subsystem", 00:19:54.213 "trtype": "$TEST_TRANSPORT", 00:19:54.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.213 "adrfam": "ipv4", 00:19:54.213 "trsvcid": "$NVMF_PORT", 00:19:54.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.213 "hdgst": ${hdgst:-false}, 00:19:54.213 "ddgst": ${ddgst:-false} 00:19:54.213 }, 00:19:54.213 "method": "bdev_nvme_attach_controller" 00:19:54.213 } 00:19:54.213 EOF 00:19:54.213 )") 00:19:54.213 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.472 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.472 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.472 { 00:19:54.472 "params": { 00:19:54.472 "name": "Nvme$subsystem", 00:19:54.472 "trtype": "$TEST_TRANSPORT", 00:19:54.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.472 "adrfam": "ipv4", 00:19:54.472 "trsvcid": "$NVMF_PORT", 00:19:54.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.472 "hdgst": 
${hdgst:-false}, 00:19:54.472 "ddgst": ${ddgst:-false} 00:19:54.472 }, 00:19:54.472 "method": "bdev_nvme_attach_controller" 00:19:54.472 } 00:19:54.472 EOF 00:19:54.472 )") 00:19:54.472 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.472 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.472 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.472 { 00:19:54.472 "params": { 00:19:54.472 "name": "Nvme$subsystem", 00:19:54.472 "trtype": "$TEST_TRANSPORT", 00:19:54.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.472 "adrfam": "ipv4", 00:19:54.472 "trsvcid": "$NVMF_PORT", 00:19:54.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.472 "hdgst": ${hdgst:-false}, 00:19:54.472 "ddgst": ${ddgst:-false} 00:19:54.472 }, 00:19:54.472 "method": "bdev_nvme_attach_controller" 00:19:54.472 } 00:19:54.472 EOF 00:19:54.472 )") 00:19:54.472 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.472 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.472 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.472 { 00:19:54.472 "params": { 00:19:54.472 "name": "Nvme$subsystem", 00:19:54.472 "trtype": "$TEST_TRANSPORT", 00:19:54.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.472 "adrfam": "ipv4", 00:19:54.472 "trsvcid": "$NVMF_PORT", 00:19:54.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.472 "hdgst": ${hdgst:-false}, 00:19:54.472 "ddgst": ${ddgst:-false} 00:19:54.472 }, 00:19:54.472 "method": "bdev_nvme_attach_controller" 00:19:54.472 } 00:19:54.472 EOF 00:19:54.473 )") 00:19:54.473 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.473 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.473 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.473 { 00:19:54.473 "params": { 00:19:54.473 "name": "Nvme$subsystem", 00:19:54.473 "trtype": "$TEST_TRANSPORT", 00:19:54.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.473 "adrfam": "ipv4", 00:19:54.473 "trsvcid": "$NVMF_PORT", 00:19:54.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.473 "hdgst": ${hdgst:-false}, 00:19:54.473 "ddgst": ${ddgst:-false} 00:19:54.473 }, 00:19:54.473 "method": "bdev_nvme_attach_controller" 00:19:54.473 } 00:19:54.473 EOF 00:19:54.473 )") 00:19:54.473 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.473 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.473 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.473 { 00:19:54.473 "params": { 00:19:54.473 "name": "Nvme$subsystem", 00:19:54.473 "trtype": "$TEST_TRANSPORT", 00:19:54.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.473 "adrfam": "ipv4", 00:19:54.473 "trsvcid": "$NVMF_PORT", 00:19:54.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.473 "hdgst": ${hdgst:-false}, 00:19:54.473 
"ddgst": ${ddgst:-false} 00:19:54.473 }, 00:19:54.473 "method": "bdev_nvme_attach_controller" 00:19:54.473 } 00:19:54.473 EOF 00:19:54.473 )") 00:19:54.473 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.473 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:19:54.473 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:19:54.473 20:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:54.473 "params": { 00:19:54.473 "name": "Nvme1", 00:19:54.473 "trtype": "tcp", 00:19:54.473 "traddr": "10.0.0.2", 00:19:54.473 "adrfam": "ipv4", 00:19:54.473 "trsvcid": "4420", 00:19:54.473 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.473 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:54.473 "hdgst": false, 00:19:54.473 "ddgst": false 00:19:54.473 }, 00:19:54.473 "method": "bdev_nvme_attach_controller" 00:19:54.473 },{ 00:19:54.473 "params": { 00:19:54.473 "name": "Nvme2", 00:19:54.473 "trtype": "tcp", 00:19:54.473 "traddr": "10.0.0.2", 00:19:54.473 "adrfam": "ipv4", 00:19:54.473 "trsvcid": "4420", 00:19:54.473 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:54.473 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:54.473 "hdgst": false, 00:19:54.473 "ddgst": false 00:19:54.473 }, 00:19:54.473 "method": "bdev_nvme_attach_controller" 00:19:54.473 },{ 00:19:54.473 "params": { 00:19:54.473 "name": "Nvme3", 00:19:54.473 "trtype": "tcp", 00:19:54.473 "traddr": "10.0.0.2", 00:19:54.473 "adrfam": "ipv4", 00:19:54.473 "trsvcid": "4420", 00:19:54.473 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:54.473 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:54.473 "hdgst": false, 00:19:54.473 "ddgst": false 00:19:54.473 }, 00:19:54.473 "method": "bdev_nvme_attach_controller" 00:19:54.473 },{ 00:19:54.473 "params": { 00:19:54.473 "name": "Nvme4", 00:19:54.473 "trtype": "tcp", 00:19:54.473 "traddr": "10.0.0.2", 00:19:54.473 "adrfam": "ipv4", 00:19:54.473 "trsvcid": "4420", 00:19:54.473 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:54.473 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:54.473 "hdgst": false, 00:19:54.473 "ddgst": false 00:19:54.473 }, 00:19:54.473 "method": "bdev_nvme_attach_controller" 00:19:54.473 },{ 00:19:54.473 "params": { 00:19:54.473 "name": "Nvme5", 00:19:54.473 "trtype": "tcp", 00:19:54.473 "traddr": "10.0.0.2", 00:19:54.473 "adrfam": "ipv4", 00:19:54.473 "trsvcid": "4420", 00:19:54.473 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:54.473 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:54.473 "hdgst": false, 00:19:54.473 "ddgst": false 00:19:54.473 }, 00:19:54.473 "method": "bdev_nvme_attach_controller" 00:19:54.473 },{ 00:19:54.473 "params": { 00:19:54.473 "name": "Nvme6", 00:19:54.473 "trtype": "tcp", 00:19:54.473 "traddr": "10.0.0.2", 00:19:54.473 "adrfam": "ipv4", 00:19:54.473 "trsvcid": "4420", 00:19:54.473 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:54.473 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:54.473 "hdgst": false, 00:19:54.473 "ddgst": false 00:19:54.473 }, 00:19:54.473 "method": "bdev_nvme_attach_controller" 00:19:54.473 },{ 00:19:54.473 "params": { 00:19:54.473 "name": "Nvme7", 00:19:54.473 "trtype": "tcp", 00:19:54.473 "traddr": "10.0.0.2", 00:19:54.473 "adrfam": "ipv4", 00:19:54.473 "trsvcid": "4420", 00:19:54.473 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:54.473 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:54.473 "hdgst": false, 00:19:54.473 "ddgst": false 00:19:54.473 }, 00:19:54.473 "method": "bdev_nvme_attach_controller" 00:19:54.473 
},{ 00:19:54.473 "params": { 00:19:54.473 "name": "Nvme8", 00:19:54.473 "trtype": "tcp", 00:19:54.473 "traddr": "10.0.0.2", 00:19:54.473 "adrfam": "ipv4", 00:19:54.473 "trsvcid": "4420", 00:19:54.473 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:54.473 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:54.473 "hdgst": false, 00:19:54.473 "ddgst": false 00:19:54.473 }, 00:19:54.473 "method": "bdev_nvme_attach_controller" 00:19:54.473 },{ 00:19:54.473 "params": { 00:19:54.473 "name": "Nvme9", 00:19:54.473 "trtype": "tcp", 00:19:54.473 "traddr": "10.0.0.2", 00:19:54.473 "adrfam": "ipv4", 00:19:54.473 "trsvcid": "4420", 00:19:54.473 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:54.473 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:54.473 "hdgst": false, 00:19:54.473 "ddgst": false 00:19:54.473 }, 00:19:54.473 "method": "bdev_nvme_attach_controller" 00:19:54.473 },{ 00:19:54.473 "params": { 00:19:54.473 "name": "Nvme10", 00:19:54.473 "trtype": "tcp", 00:19:54.473 "traddr": "10.0.0.2", 00:19:54.473 "adrfam": "ipv4", 00:19:54.473 "trsvcid": "4420", 00:19:54.473 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:54.473 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:54.473 "hdgst": false, 00:19:54.473 "ddgst": false 00:19:54.473 }, 00:19:54.473 "method": "bdev_nvme_attach_controller" 00:19:54.473 }' 00:19:54.473 [2024-05-16 20:18:41.382363] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:19:54.473 [2024-05-16 20:18:41.382435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid259816 ] 00:19:54.473 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.473 [2024-05-16 20:18:41.446419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.473 [2024-05-16 20:18:41.557222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.372 Running I/O for 10 seconds... 
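The bdevperf configuration printed above is assembled one controller at a time: the nvmf/common.sh trace shows a heredoc per subsystem being expanded with shell defaults (note the ${ddgst:-false} substitution) and the fragments then being joined through jq and printf. A minimal sketch of that assembly pattern, assuming illustrative names (gen_entry, the hdgst/ddgst variables, the 1..10 loop) rather than the suite's actual helpers, and leaving out whatever top-level wrapper bdevperf expects around the entry list:

# Build one bdev_nvme_attach_controller entry per subsystem from a heredoc,
# then let jq slurp the fragments into a single JSON array.
gen_entry() {
  local i=$1
  cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
for i in $(seq 1 10); do gen_entry "$i"; done | jq -s .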
00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:56.372 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:56.644 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:56.644 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:56.644 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:56.644 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:56.644 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.644 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:56.644 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.644 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # read_io_count=136 00:19:56.644 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 136 -ge 100 ']' 00:19:56.644 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:19:56.644 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:19:56.644 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:19:56.644 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 259641 00:19:56.644 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 259641 ']' 00:19:56.644 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 259641 00:19:56.644 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:19:56.644 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:56.644 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 259641 00:19:56.932 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:56.932 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:56.932 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 259641' 00:19:56.932 killing process with pid 259641 00:19:56.932 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 259641 00:19:56.932 [2024-05-16 20:18:43.790290] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:56.932 20:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 259641 00:19:56.932 [2024-05-16 20:18:43.791916] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24eb780 is same with the state(5) to be set 00:19:56.932 [2024-05-16 20:18:43.791952] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24eb780 is same with the state(5) to be set 00:19:56.932 [2024-05-16 20:18:43.791968] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24eb780 is same with the state(5) to be set 00:19:56.932 [2024-05-16 20:18:43.791981] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24eb780 is same with the state(5) to be set 00:19:56.932 [2024-05-16 20:18:43.792000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24eb780 is same with the state(5) to be set 00:19:56.932 [2024-05-16 20:18:43.792013] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24eb780 is same with the state(5) to be set 00:19:56.932 [2024-05-16 20:18:43.792025] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24eb780 is same with the state(5) to be set 00:19:56.932 [2024-05-16 20:18:43.792038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24eb780 is same with the state(5) to be set 00:19:56.932 [2024-05-16 20:18:43.792082] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24eb780 is same with the state(5) to be set 00:19:56.932 
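The waitforio helper traced above polls bdevperf's RPC socket for Nvme1n1's read count: up to 10 attempts, 0.25 s apart, succeeding once num_read_ops reaches 100, which is why the first pass (67 reads) sleeps and the second pass (136 reads) breaks out and lets the test kill the target PID 259641. A standalone sketch of that polling loop, assuming SPDK's scripts/rpc.py is reachable as rpc.py and jq is installed; the socket path, bdev name, and threshold mirror the trace, the function name is illustrative:

# Poll bdev_get_iostat on an SPDK application socket until the given bdev
# has completed at least min_ops reads; give up after 10 tries (0.25 s apart).
waitforio_sketch() {
  local sock=$1 bdev=$2 min_ops=${3:-100}
  local i ops
  for ((i = 10; i != 0; i--)); do
    ops=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
    if [ "$ops" -ge "$min_ops" ]; then
      return 0
    fi
    sleep 0.25
  done
  return 1
}
# Example: waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1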
[2024-05-16 20:18:43.792097] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24eb780 is same with the state(5) to be set (message repeated through 20:18:43.792826) 00:19:56.932 
[2024-05-16 20:18:43.794246] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ee180 is same with the state(5) to be set (message repeated through 20:18:43.794967) 00:19:56.933 
[2024-05-16 20:18:43.795108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.933 
[2024-05-16 20:18:43.795150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.933 
[2024-05-16 20:18:43.795184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.933 
[2024-05-16 20:18:43.795203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.933 
[2024-05-16 20:18:43.795217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.933 
[2024-05-16 20:18:43.795241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.933 
[2024-05-16 20:18:43.795257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.933 
[2024-05-16 20:18:43.795270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.933 
[2024-05-16 20:18:43.795283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a4540 is same with the state(5) to be set 00:19:56.933 
[2024-05-16 20:18:43.796477] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ebc20 is same with the state(5) to be set (message repeated through 20:18:43.797465) 00:19:56.934 
[2024-05-16 20:18:43.799614] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ec0c0 is same with the state(5) to be set (message repeated through 20:18:43.800429) 00:19:56.935 
[2024-05-16 20:18:43.801830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ec580 is same with the state(5) to be set (message repeated through 20:18:43.802619) 00:19:56.936 
[2024-05-16 20:18:43.803383] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24eca20 is same with the state(5) to be set (message repeated through 20:18:43.803434) 00:19:56.936 
[2024-05-16 20:18:43.804186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set (message repeated through 20:18:43.804342) 00:19:56.936 [2024-05-16 20:18:43.804354] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same
with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804368] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804381] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804404] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804430] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804442] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804454] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804466] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804478] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804490] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804504] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804523] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804535] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804547] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804559] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804571] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804583] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804595] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804618] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804630] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804642] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804654] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804666] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804690] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804702] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.936 [2024-05-16 20:18:43.804725] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.804737] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.804749] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.804761] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.804773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.804784] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.804796] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.804808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.804820] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.804832] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.804847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.804870] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.804891] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.804906] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the 
state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.804919] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.804931] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.804943] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.804955] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.804967] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.804979] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.804990] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecee0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806298] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806326] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806352] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806364] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806389] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806401] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806412] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806424] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806436] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806448] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806460] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806496] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806513] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806526] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806538] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806550] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806561] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806573] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806585] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806596] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806608] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806620] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806632] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806655] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806667] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806690] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806702] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806725] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806737] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 
20:18:43.806749] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806761] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806784] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806795] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806807] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806846] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806864] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806877] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806889] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806901] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806912] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806935] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806947] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806959] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806973] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.806985] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.807002] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.807014] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same 
with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.807035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.807046] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.807058] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.807070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.807081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed380 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.808346] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.808373] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.808387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.937 [2024-05-16 20:18:43.808399] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808410] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808424] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808437] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808454] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808466] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808478] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808492] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808504] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808518] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808530] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808542] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808554] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808565] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808579] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808591] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808641] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808653] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808667] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808679] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808702] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808714] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808726] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808738] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808750] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808761] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808801] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808812] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the 
state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808836] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808848] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808871] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808884] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808896] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808919] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808931] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808943] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808955] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808967] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808986] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.808998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.809010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.809022] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.809034] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.809046] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.809058] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.809070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.809082] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.809094] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.809105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.809118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.809129] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.809145] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24edcc0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.819338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.938 [2024-05-16 20:18:43.819426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.938 [2024-05-16 20:18:43.819446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.938 [2024-05-16 20:18:43.819461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.938 [2024-05-16 20:18:43.819475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.938 [2024-05-16 20:18:43.819489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.938 [2024-05-16 20:18:43.819503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.938 [2024-05-16 20:18:43.819517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.938 [2024-05-16 20:18:43.819530] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17511b0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.819600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.938 [2024-05-16 20:18:43.819621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.938 [2024-05-16 20:18:43.819636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.938 [2024-05-16 20:18:43.819650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.938 [2024-05-16 20:18:43.819664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.938 [2024-05-16 20:18:43.819678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.938 [2024-05-16 20:18:43.819692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.938 [2024-05-16 20:18:43.819706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.938 [2024-05-16 
20:18:43.819720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186dad0 is same with the state(5) to be set 00:19:56.938 [2024-05-16 20:18:43.819768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.938 [2024-05-16 20:18:43.819789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.938 [2024-05-16 20:18:43.819804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.938 [2024-05-16 20:18:43.819818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.938 [2024-05-16 20:18:43.819832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.938 [2024-05-16 20:18:43.819846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.938 [2024-05-16 20:18:43.819888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.938 [2024-05-16 20:18:43.819903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.938 [2024-05-16 20:18:43.819916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857cf0 is same with the state(5) to be set 00:19:56.939 [2024-05-16 20:18:43.819966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.819986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecc20 is same with the state(5) to be set 00:19:56.939 [2024-05-16 20:18:43.820131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820247] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1859560 is same with the state(5) to be set 00:19:56.939 [2024-05-16 20:18:43.820294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820417] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186e230 is same with the state(5) to be set 00:19:56.939 [2024-05-16 20:18:43.820462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820525] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820579] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d3270 is same with the state(5) to be set 00:19:56.939 [2024-05-16 20:18:43.820625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a23d0 is same with the state(5) to be set 00:19:56.939 [2024-05-16 20:18:43.820786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.939 [2024-05-16 20:18:43.820905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.820918] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12911b0 is same with the state(5) to be set 00:19:56.939 [2024-05-16 20:18:43.820951] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a4540 (9): Bad file descriptor 00:19:56.939 [2024-05-16 20:18:43.821061] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:56.939 [2024-05-16 20:18:43.822179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.939 [2024-05-16 20:18:43.822216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.822253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.939 [2024-05-16 20:18:43.822270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.822288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.939 [2024-05-16 20:18:43.822303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.822319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.939 [2024-05-16 20:18:43.822333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.822349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.939 [2024-05-16 20:18:43.822363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.822380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.939 [2024-05-16 20:18:43.822394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.822410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.939 [2024-05-16 20:18:43.822424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.822440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.939 [2024-05-16 20:18:43.822455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.822471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.939 [2024-05-16 20:18:43.822485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.822509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.939 [2024-05-16 20:18:43.822525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.822541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.939 [2024-05-16 20:18:43.822555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.822571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.939 [2024-05-16 20:18:43.822585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.822601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.939 [2024-05-16 20:18:43.822615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.939 [2024-05-16 20:18:43.822631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.822645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.822661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.822675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.822691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.822705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.822721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.822735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.822751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.822765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.822781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.822796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.822812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.822826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.822841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.822865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.822883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.822901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.822918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.822933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.822950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.822964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.822980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.822994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.940 [2024-05-16 20:18:43.823790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.940 [2024-05-16 20:18:43.823806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.823820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.823836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.823851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.823875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.823889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.823906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.823921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.823937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.823951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.823967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.823981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.823997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824211] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ef80 is same with the state(5) to be set 00:19:56.941 [2024-05-16 20:18:43.824305] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x169ef80 was disconnected and freed. reset controller. 
00:19:56.941 [2024-05-16 20:18:43.824459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 
20:18:43.824794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.824985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.824999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.825015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.941 [2024-05-16 20:18:43.825029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.941 [2024-05-16 20:18:43.825045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825111] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.825974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.825988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.826004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.826017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.826033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.826048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.826064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.826077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.826093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.826106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.826122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.826136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.826151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.942 [2024-05-16 20:18:43.826165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.942 [2024-05-16 20:18:43.826180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.826194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.826210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.826228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.826244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.826258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.826273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.826287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.826303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.826316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.826332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.826345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.826361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.826375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.826391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.826404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.826420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.826433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.826448] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1843280 is same with the state(5) to be set 00:19:56.943 [2024-05-16 20:18:43.827587] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1843280 was disconnected and freed. reset controller. 00:19:56.943 [2024-05-16 20:18:43.827987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828164] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.943 [2024-05-16 20:18:43.828876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.943 [2024-05-16 20:18:43.828892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.828909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.828926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.828940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.828955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.828969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.828985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:56.944 [2024-05-16 20:18:43.829387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 
20:18:43.829695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.829930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.944 [2024-05-16 20:18:43.829945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.944 [2024-05-16 20:18:43.830037] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17e8560 was disconnected and freed. reset controller. 
00:19:56.944 [2024-05-16 20:18:43.832627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17511b0 (9): Bad file descriptor
00:19:56.944 [2024-05-16 20:18:43.832675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186dad0 (9): Bad file descriptor
00:19:56.944 [2024-05-16 20:18:43.832708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1857cf0 (9): Bad file descriptor
00:19:56.944 [2024-05-16 20:18:43.832734] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ecc20 (9): Bad file descriptor
00:19:56.944 [2024-05-16 20:18:43.832757] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1859560 (9): Bad file descriptor
00:19:56.944 [2024-05-16 20:18:43.832792] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186e230 (9): Bad file descriptor
00:19:56.945 [2024-05-16 20:18:43.832817] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d3270 (9): Bad file descriptor
00:19:56.945 [2024-05-16 20:18:43.832844] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a23d0 (9): Bad file descriptor
00:19:56.945 [2024-05-16 20:18:43.832891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12911b0 (9): Bad file descriptor
00:19:56.945 [2024-05-16 20:18:43.834601] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:19:56.945 [2024-05-16 20:18:43.834705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:56.945 [2024-05-16 20:18:43.834732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:56.945 [2024-05-16 20:18:43.834757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:56.945 [2024-05-16 20:18:43.834773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:56.945 [2024-05-16 20:18:43.834789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:56.945 [2024-05-16 20:18:43.834804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:56.945 [2024-05-16 20:18:43.834820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:56.945 [2024-05-16 20:18:43.834834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:56.945 [2024-05-16 20:18:43.834849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:56.945 [2024-05-16 20:18:43.834871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:56.945 [2024-05-16 20:18:43.834888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945
[2024-05-16 20:18:43.834903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.834918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.834933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.834948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.834962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.834978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.834992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 
20:18:43.835206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835506] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.945 [2024-05-16 20:18:43.835791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.945 [2024-05-16 20:18:43.835809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.835825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.835840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.835862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.835879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.835895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.835908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.835924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.835938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.835953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.835967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.835983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.835997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.946 [2024-05-16 20:18:43.836664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.946 [2024-05-16 20:18:43.836679] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184ac40 is same with the state(5) to be set 00:19:56.946 [2024-05-16 20:18:43.837978] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:56.946 [2024-05-16 20:18:43.838651] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:56.946 [2024-05-16 20:18:43.838972] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:19:56.946 [2024-05-16 20:18:43.839004] 
nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:19:56.946 [2024-05-16 20:18:43.839023] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:56.946 [2024-05-16 20:18:43.839196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.946 [2024-05-16 20:18:43.839226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x186e230 with addr=10.0.0.2, port=4420 00:19:56.946 [2024-05-16 20:18:43.839243] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186e230 is same with the state(5) to be set 00:19:56.946 [2024-05-16 20:18:43.839370] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:56.946 [2024-05-16 20:18:43.839448] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:56.946 [2024-05-16 20:18:43.839789] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:56.946 [2024-05-16 20:18:43.839879] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:56.946 [2024-05-16 20:18:43.840049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.946 [2024-05-16 20:18:43.840078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1857cf0 with addr=10.0.0.2, port=4420 00:19:56.946 [2024-05-16 20:18:43.840095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857cf0 is same with the state(5) to be set 00:19:56.946 [2024-05-16 20:18:43.840182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.947 [2024-05-16 20:18:43.840209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ecc20 with addr=10.0.0.2, port=4420 00:19:56.947 [2024-05-16 20:18:43.840225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecc20 is same with the state(5) to be set 00:19:56.947 [2024-05-16 20:18:43.840296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.947 [2024-05-16 20:18:43.840321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a4540 with addr=10.0.0.2, port=4420 00:19:56.947 [2024-05-16 20:18:43.840336] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a4540 is same with the state(5) to be set 00:19:56.947 [2024-05-16 20:18:43.840359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186e230 (9): Bad file descriptor 00:19:56.947 [2024-05-16 20:18:43.840751] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1857cf0 (9): Bad file descriptor 00:19:56.947 [2024-05-16 20:18:43.840780] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ecc20 (9): Bad file descriptor 00:19:56.947 [2024-05-16 20:18:43.840799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a4540 (9): Bad file descriptor 00:19:56.947 [2024-05-16 20:18:43.840816] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:19:56.947 [2024-05-16 20:18:43.840829] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:19:56.947 [2024-05-16 20:18:43.840846] nvme_ctrlr.c:1042:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:19:56.947 [2024-05-16 20:18:43.840932] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:56.947 [2024-05-16 20:18:43.840954] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:19:56.947 [2024-05-16 20:18:43.840967] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:19:56.947 [2024-05-16 20:18:43.840980] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:19:56.947 [2024-05-16 20:18:43.840999] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:19:56.947 [2024-05-16 20:18:43.841013] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:19:56.947 [2024-05-16 20:18:43.841026] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:19:56.947 [2024-05-16 20:18:43.841044] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:56.947 [2024-05-16 20:18:43.841057] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:56.947 [2024-05-16 20:18:43.841070] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:56.947 [2024-05-16 20:18:43.841130] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:56.947 [2024-05-16 20:18:43.841148] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:56.947 [2024-05-16 20:18:43.841160] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:56.947 [2024-05-16 20:18:43.842750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.842776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.842806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.842821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.842838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.842860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.842878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.842892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.842909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.842928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.842945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.842959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.842975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.842989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 
20:18:43.843093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843399] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.947 [2024-05-16 20:18:43.843668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.947 [2024-05-16 20:18:43.843682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.843701] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.843716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.843732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.843746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.843761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.843775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.843791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.843806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.843821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.843836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.843857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.843873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.843890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.843905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.843921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.843935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.843952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.843966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.843982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.843996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844624] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.844728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.844742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184bf40 is same with the state(5) to be set 00:19:56.948 [2024-05-16 20:18:43.846021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.846045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.846065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.846081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.846098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.846112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.846132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.846147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.846163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.846177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.948 [2024-05-16 20:18:43.846193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.948 [2024-05-16 20:18:43.846206] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repetitive NOTICE output condensed: nvme_qpair.c: 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion print the same pair of messages for READ commands sqid:1 cid:0-63 (nsid:1, lba:16384-24448, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the full run is logged once each for tqpair=0x17e5b50, 0x17e7040 and 0x169c540, each run ending with nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair is same with the state(5) to be set, and a further identical run starts at 20:18:43.855657]
00:19:56.954 [2024-05-16 20:18:43.856371] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.954 [2024-05-16 20:18:43.856386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.954 [2024-05-16 20:18:43.856402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.954 [2024-05-16 20:18:43.856416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.954 [2024-05-16 20:18:43.856432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.954 [2024-05-16 20:18:43.856446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.954 [2024-05-16 20:18:43.856461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.954 [2024-05-16 20:18:43.856475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.954 [2024-05-16 20:18:43.856491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.954 [2024-05-16 20:18:43.856505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.954 [2024-05-16 20:18:43.856520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.954 [2024-05-16 20:18:43.856535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.954 [2024-05-16 20:18:43.856550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.954 [2024-05-16 20:18:43.856564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.954 [2024-05-16 20:18:43.856580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.954 [2024-05-16 20:18:43.856593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.954 [2024-05-16 20:18:43.856609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.954 [2024-05-16 20:18:43.856623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.954 [2024-05-16 20:18:43.856638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.954 [2024-05-16 20:18:43.856652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.954 [2024-05-16 20:18:43.856667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.954 [2024-05-16 20:18:43.856681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.954 [2024-05-16 20:18:43.856697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.954 [2024-05-16 20:18:43.856711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.954 [2024-05-16 20:18:43.856731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.954 [2024-05-16 20:18:43.856745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.954 [2024-05-16 20:18:43.856760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.954 [2024-05-16 20:18:43.856774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.954 [2024-05-16 20:18:43.856790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.954 [2024-05-16 20:18:43.856804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.954 [2024-05-16 20:18:43.856820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.856833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.856849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.856869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.856886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.856900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.856915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.856929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.856944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.856958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.856973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.856987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.857016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.857046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.857075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.857109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.857139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.857168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.857197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.857226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.857255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.857284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.857314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.857344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.857373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.857402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.857433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.857462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.857495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.857525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.857554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.857584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.857598] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169da60 is same with the state(5) to be set 00:19:56.955 [2024-05-16 20:18:43.858836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.858865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.858890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.858906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.858922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.858936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.858951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.858965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.955 [2024-05-16 20:18:43.858981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.955 [2024-05-16 20:18:43.858995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.859977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.859991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.860006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.860021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:56.956 [2024-05-16 20:18:43.860036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.860050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.860066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.860079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.860095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.860109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.860124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.860138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.956 [2024-05-16 20:18:43.860153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.956 [2024-05-16 20:18:43.860167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 20:18:43.860182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.957 [2024-05-16 20:18:43.860196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 20:18:43.860211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.957 [2024-05-16 20:18:43.860225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 20:18:43.860244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.957 [2024-05-16 20:18:43.860258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 20:18:43.860274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.957 [2024-05-16 20:18:43.860288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 20:18:43.860303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.957 [2024-05-16 20:18:43.860317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 
20:18:43.860333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.957 [2024-05-16 20:18:43.860346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 20:18:43.860362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.957 [2024-05-16 20:18:43.860376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 20:18:43.860392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.957 [2024-05-16 20:18:43.860406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 20:18:43.860421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.957 [2024-05-16 20:18:43.860435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 20:18:43.860451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.957 [2024-05-16 20:18:43.860465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 20:18:43.860480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.957 [2024-05-16 20:18:43.860494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 20:18:43.860510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.957 [2024-05-16 20:18:43.860523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 20:18:43.860539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.957 [2024-05-16 20:18:43.860553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 20:18:43.860568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.957 [2024-05-16 20:18:43.860582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 20:18:43.860597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.957 [2024-05-16 20:18:43.860614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 20:18:43.860630] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.957 [2024-05-16 20:18:43.860644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 20:18:43.860660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.957 [2024-05-16 20:18:43.860674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 20:18:43.860689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.957 [2024-05-16 20:18:43.860703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 20:18:43.860719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.957 [2024-05-16 20:18:43.860732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 20:18:43.860747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.957 [2024-05-16 20:18:43.860761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.957 [2024-05-16 20:18:43.860774] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0320 is same with the state(5) to be set 00:19:56.957 [2024-05-16 20:18:43.862929] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:19:56.957 [2024-05-16 20:18:43.862964] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:19:56.957 [2024-05-16 20:18:43.862982] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:19:56.957 [2024-05-16 20:18:43.862999] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:19:56.957 [2024-05-16 20:18:43.863132] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:56.957 [2024-05-16 20:18:43.863162] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
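The dumps above are the host-side fallout of the target being shut down mid-I/O: every command still queued on the affected I/O qpairs is completed by the initiator with ABORTED - SQ DELETION (status 00/08), after which bdev_nvme starts resetting the controllers and reports that further failovers are already in progress. A minimal sketch for triaging such a burst offline, assuming the console output above has been captured to a file named build.log (a hypothetical name, not something this job produces):

    # Rough triage of the abort burst, assuming the output above was saved to build.log.
    grep -o 'ABORTED - SQ DELETION' build.log | wc -l                           # total aborted commands
    grep -o 'tqpair=0x[0-9a-f]*' build.log | sort | uniq -c | sort -rn | head   # activity per qpair
    grep -o 'resetting controller' build.log | wc -l                            # controller resets triggered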
00:19:56.957 [2024-05-16 20:18:43.863264] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 
00:19:56.957 task offset: 16384 on job bdev=Nvme8n1 fails 
00:19:56.957 
00:19:56.957 Latency(us) 
00:19:56.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:56.957 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:56.957 Job: Nvme1n1 ended in about 0.78 seconds with error 
00:19:56.957 Verification LBA range: start 0x0 length 0x400 
00:19:56.957 Nvme1n1 : 0.78 171.44 10.72 82.50 0.00 248711.52 29515.47 248551.35 
00:19:56.957 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:56.957 Job: Nvme2n1 ended in about 0.78 seconds with error 
00:19:56.957 Verification LBA range: start 0x0 length 0x400 
00:19:56.957 Nvme2n1 : 0.78 163.30 10.21 81.65 0.00 251776.06 20971.52 264085.81 
00:19:56.957 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:56.957 Job: Nvme3n1 ended in about 0.79 seconds with error 
00:19:56.957 Verification LBA range: start 0x0 length 0x400 
00:19:56.957 Nvme3n1 : 0.79 162.63 10.16 81.32 0.00 246696.01 18155.90 267192.70 
00:19:56.957 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:56.957 Job: Nvme4n1 ended in about 0.79 seconds with error 
00:19:56.957 Verification LBA range: start 0x0 length 0x400 
00:19:56.957 Nvme4n1 : 0.79 161.97 10.12 80.99 0.00 241835.43 22136.60 228356.55 
00:19:56.957 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:56.957 Job: Nvme5n1 ended in about 0.77 seconds with error 
00:19:56.957 Verification LBA range: start 0x0 length 0x400 
00:19:56.957 Nvme5n1 : 0.77 165.74 10.36 82.87 0.00 229857.15 11505.21 260978.92 
00:19:56.957 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:56.957 Job: Nvme6n1 ended in about 0.79 seconds with error 
00:19:56.957 Verification LBA range: start 0x0 length 0x400 
00:19:56.957 Nvme6n1 : 0.79 161.32 10.08 80.66 0.00 231015.66 20194.80 264085.81 
00:19:56.957 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:56.957 Job: Nvme7n1 ended in about 0.80 seconds with error 
00:19:56.957 Verification LBA range: start 0x0 length 0x400 
00:19:56.957 Nvme7n1 : 0.80 165.70 10.36 80.34 0.00 221514.16 16602.45 259425.47 
00:19:56.957 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:56.957 Job: Nvme8n1 ended in about 0.77 seconds with error 
00:19:56.957 Verification LBA range: start 0x0 length 0x400 
00:19:56.957 Nvme8n1 : 0.77 166.38 10.40 83.19 0.00 211079.65 11602.30 268746.15 
00:19:56.957 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:56.957 Job: Nvme9n1 ended in about 0.80 seconds with error 
00:19:56.957 Verification LBA range: start 0x0 length 0x400 
00:19:56.957 Nvme9n1 : 0.80 80.02 5.00 80.02 0.00 323302.59 22039.51 298261.62 
00:19:56.957 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:56.957 Job: Nvme10n1 ended in about 0.77 seconds with error 
00:19:56.957 Verification LBA range: start 0x0 length 0x400 
00:19:56.957 Nvme10n1 : 0.77 92.14 5.76 83.06 0.00 284457.52 18252.99 273406.48 
00:19:56.957 =================================================================================================================== 
00:19:56.957 Total : 1490.65 93.17 816.59 0.00 245216.46 11505.21 298261.62 
00:19:56.957 [2024-05-16 20:18:43.892310] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on 
non-zero 00:19:56.957 [2024-05-16 20:18:43.892405] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:19:56.957 [2024-05-16 20:18:43.892697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.957 [2024-05-16 20:18:43.892733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a23d0 with addr=10.0.0.2, port=4420 00:19:56.957 [2024-05-16 20:18:43.892755] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a23d0 is same with the state(5) to be set 00:19:56.957 [2024-05-16 20:18:43.892858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.957 [2024-05-16 20:18:43.892885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12911b0 with addr=10.0.0.2, port=4420 00:19:56.957 [2024-05-16 20:18:43.892901] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12911b0 is same with the state(5) to be set 00:19:56.957 [2024-05-16 20:18:43.892992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.957 [2024-05-16 20:18:43.893018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d3270 with addr=10.0.0.2, port=4420 00:19:56.958 [2024-05-16 20:18:43.893034] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d3270 is same with the state(5) to be set 00:19:56.958 [2024-05-16 20:18:43.893113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.958 [2024-05-16 20:18:43.893138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1859560 with addr=10.0.0.2, port=4420 00:19:56.958 [2024-05-16 20:18:43.893154] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1859560 is same with the state(5) to be set 00:19:56.958 [2024-05-16 20:18:43.894838] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:19:56.958 [2024-05-16 20:18:43.894873] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:56.958 [2024-05-16 20:18:43.894895] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:19:56.958 [2024-05-16 20:18:43.894912] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:19:56.958 [2024-05-16 20:18:43.895057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.958 [2024-05-16 20:18:43.895086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x186dad0 with addr=10.0.0.2, port=4420 00:19:56.958 [2024-05-16 20:18:43.895103] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186dad0 is same with the state(5) to be set 00:19:56.958 [2024-05-16 20:18:43.895195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.958 [2024-05-16 20:18:43.895220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17511b0 with addr=10.0.0.2, port=4420 00:19:56.958 [2024-05-16 20:18:43.895236] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17511b0 is same with the state(5) to be set 00:19:56.958 [2024-05-16 20:18:43.895262] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x16a23d0 (9): Bad file descriptor 00:19:56.958 [2024-05-16 20:18:43.895287] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12911b0 (9): Bad file descriptor 00:19:56.958 [2024-05-16 20:18:43.895306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d3270 (9): Bad file descriptor 00:19:56.958 [2024-05-16 20:18:43.895323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1859560 (9): Bad file descriptor 00:19:56.958 [2024-05-16 20:18:43.895384] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:56.958 [2024-05-16 20:18:43.895407] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:56.958 [2024-05-16 20:18:43.895425] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:56.958 [2024-05-16 20:18:43.895443] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:56.958 [2024-05-16 20:18:43.895911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.958 [2024-05-16 20:18:43.895940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x186e230 with addr=10.0.0.2, port=4420 00:19:56.958 [2024-05-16 20:18:43.895957] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186e230 is same with the state(5) to be set 00:19:56.958 [2024-05-16 20:18:43.896037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.958 [2024-05-16 20:18:43.896062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a4540 with addr=10.0.0.2, port=4420 00:19:56.958 [2024-05-16 20:18:43.896077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a4540 is same with the state(5) to be set 00:19:56.958 [2024-05-16 20:18:43.896171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.958 [2024-05-16 20:18:43.896196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ecc20 with addr=10.0.0.2, port=4420 00:19:56.958 [2024-05-16 20:18:43.896212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecc20 is same with the state(5) to be set 00:19:56.958 [2024-05-16 20:18:43.896287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.958 [2024-05-16 20:18:43.896312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1857cf0 with addr=10.0.0.2, port=4420 00:19:56.958 [2024-05-16 20:18:43.896327] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857cf0 is same with the state(5) to be set 00:19:56.958 [2024-05-16 20:18:43.896351] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186dad0 (9): Bad file descriptor 00:19:56.958 [2024-05-16 20:18:43.896370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17511b0 (9): Bad file descriptor 00:19:56.958 [2024-05-16 20:18:43.896387] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:56.958 [2024-05-16 20:18:43.896400] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller 
reinitialization failed 00:19:56.958 [2024-05-16 20:18:43.896417] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:56.958 [2024-05-16 20:18:43.896439] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:19:56.958 [2024-05-16 20:18:43.896453] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:19:56.958 [2024-05-16 20:18:43.896466] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:19:56.958 [2024-05-16 20:18:43.896483] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:19:56.958 [2024-05-16 20:18:43.896497] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:19:56.958 [2024-05-16 20:18:43.896510] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:19:56.958 [2024-05-16 20:18:43.896526] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:19:56.958 [2024-05-16 20:18:43.896539] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:19:56.958 [2024-05-16 20:18:43.896552] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:19:56.958 [2024-05-16 20:18:43.896649] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:56.958 [2024-05-16 20:18:43.896670] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:56.958 [2024-05-16 20:18:43.896682] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:56.958 [2024-05-16 20:18:43.896694] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:56.958 [2024-05-16 20:18:43.896710] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186e230 (9): Bad file descriptor 00:19:56.958 [2024-05-16 20:18:43.896729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a4540 (9): Bad file descriptor 00:19:56.958 [2024-05-16 20:18:43.896747] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ecc20 (9): Bad file descriptor 00:19:56.958 [2024-05-16 20:18:43.896763] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1857cf0 (9): Bad file descriptor 00:19:56.958 [2024-05-16 20:18:43.896779] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:19:56.958 [2024-05-16 20:18:43.896791] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:19:56.958 [2024-05-16 20:18:43.896804] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:19:56.958 [2024-05-16 20:18:43.896820] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:19:56.958 [2024-05-16 20:18:43.896833] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:19:56.958 [2024-05-16 20:18:43.896846] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:19:56.958 [2024-05-16 20:18:43.896893] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:56.958 [2024-05-16 20:18:43.896911] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:56.958 [2024-05-16 20:18:43.896929] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:19:56.958 [2024-05-16 20:18:43.896942] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:19:56.958 [2024-05-16 20:18:43.896955] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:19:56.958 [2024-05-16 20:18:43.896971] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:56.958 [2024-05-16 20:18:43.896985] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:56.958 [2024-05-16 20:18:43.896998] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:56.958 [2024-05-16 20:18:43.897013] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:19:56.958 [2024-05-16 20:18:43.897026] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:19:56.958 [2024-05-16 20:18:43.897038] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:19:56.958 [2024-05-16 20:18:43.897053] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:19:56.958 [2024-05-16 20:18:43.897066] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:19:56.958 [2024-05-16 20:18:43.897078] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:19:56.958 [2024-05-16 20:18:43.897121] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:56.958 [2024-05-16 20:18:43.897139] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:56.958 [2024-05-16 20:18:43.897151] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:56.958 [2024-05-16 20:18:43.897162] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
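The burst of errors above is the host side of this shutdown test case tearing itself down: the target process is already gone (the kill -9 a few lines below reports "No such process"), so every reconnect attempt gets connect() errno 111 and bdev_nvme eventually abandons the failover/reset for each cnode. A minimal sketch of what that errno means on the Linux hosts this job runs on, purely illustrative and not part of the test itself:

    import errno, os

    # "connect() failed, errno = 111" in the posix_sock_create lines above is
    # ECONNREFUSED on Linux: nothing is listening on 10.0.0.2:4420 any more.
    assert errno.ECONNREFUSED == 111      # Linux value; other platforms differ
    print(errno.errorcode[111], "->", os.strerror(111))
    # prints: ECONNREFUSED -> Connection refused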
00:19:57.525 20:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:19:57.525 20:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 259816 00:19:58.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (259816) - No such process 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:58.461 rmmod nvme_tcp 00:19:58.461 rmmod nvme_fabrics 00:19:58.461 rmmod nvme_keyring 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.461 20:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.364 20:18:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:00.364 00:20:00.364 real 0m7.189s 00:20:00.364 user 0m16.941s 00:20:00.364 sys 0m1.345s 00:20:00.364 
20:18:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:00.364 20:18:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:00.364 ************************************ 00:20:00.364 END TEST nvmf_shutdown_tc3 00:20:00.364 ************************************ 00:20:00.364 20:18:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:00.364 00:20:00.364 real 0m26.124s 00:20:00.364 user 1m10.596s 00:20:00.364 sys 0m5.893s 00:20:00.364 20:18:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:00.364 20:18:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:00.364 ************************************ 00:20:00.364 END TEST nvmf_shutdown 00:20:00.364 ************************************ 00:20:00.623 20:18:47 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:20:00.623 20:18:47 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:00.623 20:18:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:00.623 20:18:47 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:20:00.623 20:18:47 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:00.623 20:18:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:00.623 20:18:47 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:20:00.623 20:18:47 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:00.623 20:18:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:00.623 20:18:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:00.623 20:18:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:00.623 ************************************ 00:20:00.623 START TEST nvmf_multicontroller 00:20:00.623 ************************************ 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:00.623 * Looking for test storage... 
00:20:00.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:00.623 20:18:47 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:00.623 20:18:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:02.526 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.526 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:02.526 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:02.526 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:02.526 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:02.526 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:02.526 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:02.526 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:02.526 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:02.526 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:02.526 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.527 20:18:49 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:02.527 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:02.527 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:02.527 Found net devices under 0000:09:00.0: cvl_0_0 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:02.527 Found net devices under 0000:09:00.1: cvl_0_1 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:02.527 20:18:49 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:02.527 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:02.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:02.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:20:02.785 00:20:02.785 --- 10.0.0.2 ping statistics --- 00:20:02.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.785 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:02.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:02.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:20:02.785 00:20:02.785 --- 10.0.0.1 ping statistics --- 00:20:02.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.785 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=262213 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 262213 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 262213 ']' 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:02.785 20:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:02.785 [2024-05-16 20:18:49.766753] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:20:02.785 [2024-05-16 20:18:49.766825] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.785 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.785 [2024-05-16 20:18:49.833306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:03.042 [2024-05-16 20:18:49.950738] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.042 [2024-05-16 20:18:49.950793] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.042 [2024-05-16 20:18:49.950809] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.042 [2024-05-16 20:18:49.950822] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.042 [2024-05-16 20:18:49.950843] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.042 [2024-05-16 20:18:49.950951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.042 [2024-05-16 20:18:49.951038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.042 [2024-05-16 20:18:49.951041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.606 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:03.606 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:20:03.606 20:18:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:03.606 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:03.606 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.606 20:18:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.606 20:18:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:03.606 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.606 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.606 [2024-05-16 20:18:50.744935] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.606 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.606 20:18:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:03.606 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.606 20:18:50 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.865 Malloc0 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.865 [2024-05-16 20:18:50.803360] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:03.865 [2024-05-16 20:18:50.803637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.865 [2024-05-16 20:18:50.811465] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.865 Malloc1 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.865 20:18:50 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=262365 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 262365 /var/tmp/bdevperf.sock 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 262365 ']' 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:03.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
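The bdevperf process being waited on here exposes its own JSON-RPC socket at /var/tmp/bdevperf.sock, and the rpc_cmd calls that follow go through it: one successful bdev_nvme_attach_controller for NVMe0, then several deliberate re-attach attempts wrapped in NOT that are expected to fail with code -114. A standalone sketch of that first attach, assuming the SPDK checkout path shown in this log and its stock scripts/rpc.py client (the test itself uses the rpc_cmd shell helper rather than subprocess):

    import subprocess

    SPDK = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk"  # checkout path taken from this log

    # Same call shape as the first rpc_cmd below: register bdev NVMe0 against
    # nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, pinning the host-side address/port.
    subprocess.run(
        [
            f"{SPDK}/scripts/rpc.py", "-s", "/var/tmp/bdevperf.sock",
            "bdev_nvme_attach_controller",
            "-b", "NVMe0", "-t", "tcp", "-a", "10.0.0.2", "-s", "4420",
            "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1",
            "-i", "10.0.0.2", "-c", "60000",
        ],
        check=True,
    )
    # Re-running this with the same -b NVMe0 is what produces the
    # "code": -114 "already exists" JSON-RPC errors captured further down.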
00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:03.865 20:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.123 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:04.123 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:20:04.123 20:18:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:04.124 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.124 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.124 NVMe0n1 00:20:04.124 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.124 20:18:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:04.124 20:18:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:04.124 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.124 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.124 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.124 1 00:20:04.124 20:18:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:04.124 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:04.124 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:04.124 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:04.124 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:04.124 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:04.124 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:04.124 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:04.124 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.124 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.382 request: 00:20:04.382 { 00:20:04.382 "name": "NVMe0", 00:20:04.382 "trtype": "tcp", 00:20:04.382 "traddr": "10.0.0.2", 00:20:04.382 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:04.382 "hostaddr": "10.0.0.2", 00:20:04.382 "hostsvcid": "60000", 00:20:04.382 "adrfam": "ipv4", 00:20:04.382 "trsvcid": "4420", 00:20:04.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.382 "method": 
"bdev_nvme_attach_controller", 00:20:04.382 "req_id": 1 00:20:04.382 } 00:20:04.382 Got JSON-RPC error response 00:20:04.382 response: 00:20:04.382 { 00:20:04.382 "code": -114, 00:20:04.382 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:04.382 } 00:20:04.382 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:04.382 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:04.382 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:04.382 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:04.382 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:04.382 20:18:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:04.382 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:04.382 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:04.382 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:04.382 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:04.382 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.383 request: 00:20:04.383 { 00:20:04.383 "name": "NVMe0", 00:20:04.383 "trtype": "tcp", 00:20:04.383 "traddr": "10.0.0.2", 00:20:04.383 "hostaddr": "10.0.0.2", 00:20:04.383 "hostsvcid": "60000", 00:20:04.383 "adrfam": "ipv4", 00:20:04.383 "trsvcid": "4420", 00:20:04.383 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:04.383 "method": "bdev_nvme_attach_controller", 00:20:04.383 "req_id": 1 00:20:04.383 } 00:20:04.383 Got JSON-RPC error response 00:20:04.383 response: 00:20:04.383 { 00:20:04.383 "code": -114, 00:20:04.383 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:04.383 } 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.383 request: 00:20:04.383 { 00:20:04.383 "name": "NVMe0", 00:20:04.383 "trtype": "tcp", 00:20:04.383 "traddr": "10.0.0.2", 00:20:04.383 "hostaddr": "10.0.0.2", 00:20:04.383 "hostsvcid": "60000", 00:20:04.383 "adrfam": "ipv4", 00:20:04.383 "trsvcid": "4420", 00:20:04.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.383 "multipath": "disable", 00:20:04.383 "method": "bdev_nvme_attach_controller", 00:20:04.383 "req_id": 1 00:20:04.383 } 00:20:04.383 Got JSON-RPC error response 00:20:04.383 response: 00:20:04.383 { 00:20:04.383 "code": -114, 00:20:04.383 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:04.383 } 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.383 request: 00:20:04.383 { 00:20:04.383 "name": "NVMe0", 00:20:04.383 "trtype": "tcp", 00:20:04.383 "traddr": "10.0.0.2", 00:20:04.383 "hostaddr": "10.0.0.2", 00:20:04.383 "hostsvcid": "60000", 00:20:04.383 "adrfam": "ipv4", 00:20:04.383 "trsvcid": "4420", 00:20:04.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.383 "multipath": "failover", 00:20:04.383 "method": "bdev_nvme_attach_controller", 00:20:04.383 "req_id": 1 00:20:04.383 } 00:20:04.383 Got JSON-RPC error response 00:20:04.383 response: 00:20:04.383 { 00:20:04.383 "code": -114, 00:20:04.383 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:04.383 } 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.383 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.383 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.641 00:20:04.641 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.641 20:18:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:04.641 20:18:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:04.641 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.641 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.641 20:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.642 20:18:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:04.642 20:18:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:06.016 0 00:20:06.016 20:18:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:06.016 20:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.016 20:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:06.016 20:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.016 20:18:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 262365 00:20:06.016 20:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 262365 ']' 00:20:06.016 20:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 262365 00:20:06.016 20:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:20:06.016 20:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:06.016 20:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 262365 00:20:06.016 20:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:06.016 20:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:06.016 20:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 262365' 00:20:06.016 killing process with pid 262365 00:20:06.016 20:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 262365 00:20:06.016 20:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 262365 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:06.016 20:18:53 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:20:06.016 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:06.016 [2024-05-16 20:18:50.914848] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:20:06.016 [2024-05-16 20:18:50.914935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid262365 ] 00:20:06.016 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.016 [2024-05-16 20:18:50.973745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.016 [2024-05-16 20:18:51.083165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.016 [2024-05-16 20:18:51.613339] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 6f78adf9-cf6e-4aad-8127-b7a28440ee65 already exists 00:20:06.016 [2024-05-16 20:18:51.613377] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:6f78adf9-cf6e-4aad-8127-b7a28440ee65 alias for bdev NVMe1n1 00:20:06.016 [2024-05-16 20:18:51.613409] bdev_nvme.c:4308:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:06.016 Running I/O for 1 seconds... 
00:20:06.016 00:20:06.016 Latency(us) 00:20:06.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.016 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:06.016 NVMe0n1 : 1.00 19204.92 75.02 0.00 0.00 6654.46 2160.26 11553.75 00:20:06.016 =================================================================================================================== 00:20:06.016 Total : 19204.92 75.02 0.00 0.00 6654.46 2160.26 11553.75 00:20:06.016 Received shutdown signal, test time was about 1.000000 seconds 00:20:06.016 00:20:06.016 Latency(us) 00:20:06.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.016 =================================================================================================================== 00:20:06.016 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:06.016 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:06.016 rmmod nvme_tcp 00:20:06.016 rmmod nvme_fabrics 00:20:06.016 rmmod nvme_keyring 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 262213 ']' 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 262213 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 262213 ']' 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 262213 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 262213 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 262213' 00:20:06.016 killing process with pid 262213 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 262213 00:20:06.016 [2024-05-16 20:18:53.121554] 
app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:06.016 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 262213 00:20:06.583 20:18:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:06.583 20:18:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:06.583 20:18:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:06.583 20:18:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:06.583 20:18:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:06.583 20:18:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.583 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.583 20:18:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.484 20:18:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:08.484 00:20:08.484 real 0m7.884s 00:20:08.484 user 0m13.225s 00:20:08.484 sys 0m2.279s 00:20:08.484 20:18:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:08.484 20:18:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:08.484 ************************************ 00:20:08.484 END TEST nvmf_multicontroller 00:20:08.484 ************************************ 00:20:08.484 20:18:55 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:08.484 20:18:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:08.484 20:18:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:08.484 20:18:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:08.484 ************************************ 00:20:08.484 START TEST nvmf_aer 00:20:08.484 ************************************ 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:08.484 * Looking for test storage... 
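Editorial aside: for readers who want to replay the multicontroller sequence traced above outside the harness, the following is a minimal sketch in bash. It is not the test script itself; it assumes an SPDK checkout at $SPDK_DIR, a bdevperf instance already listening on /var/tmp/bdevperf.sock, and an NVMe-oF TCP target exposing nqn.2016-06.io.spdk:cnode1 on 10.0.0.2 ports 4420 and 4421. Only RPC names and flags that appear verbatim in the log are used.

```bash
#!/usr/bin/env bash
# Hypothetical replay of the multicontroller steps traced above.
# Assumes: $SPDK_DIR points at an SPDK source tree, bdevperf is already
# running with an RPC socket at /var/tmp/bdevperf.sock, and the target
# listens on 10.0.0.2:4420 and 10.0.0.2:4421 for nqn.2016-06.io.spdk:cnode1.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock"
NQN=nqn.2016-06.io.spdk:cnode1

# First attach: primary path on port 4420, fixed host address and hostsvcid.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n "$NQN" -i 10.0.0.2 -c 60000

# Re-attaching the same name/path with "-x failover" is expected to fail
# with JSON-RPC error -114, exactly as seen in the log above.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n "$NQN" -i 10.0.0.2 -c 60000 -x failover || true

# Adding the second listener port as another path, then detaching it, works.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN"
$RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN"

# Attach an independent second controller and confirm both are present.
$RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n "$NQN" -i 10.0.0.2 -c 60000
$RPC bdev_nvme_get_controllers | grep -c NVMe

# Kick off the queued bdevperf job, as multicontroller.sh@95 does.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
```

The "-i 10.0.0.2 -c 60000" pair matches the "hostaddr"/"hostsvcid" fields in the JSON-RPC request dumped above, which is why the duplicate attach is rejected as the same network path.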
00:20:08.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:08.484 20:18:55 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:08.485 20:18:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:10.383 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 
0x159b)' 00:20:10.383 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:10.383 Found net devices under 0000:09:00.0: cvl_0_0 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.383 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:10.384 Found net devices under 0000:09:00.1: cvl_0_1 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:10.384 
20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:10.384 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:10.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:10.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:20:10.642 00:20:10.642 --- 10.0.0.2 ping statistics --- 00:20:10.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.642 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:10.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:10.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:20:10.642 00:20:10.642 --- 10.0.0.1 ping statistics --- 00:20:10.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.642 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=264574 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 264574 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 264574 ']' 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:10.642 20:18:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:10.643 [2024-05-16 20:18:57.685215] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:20:10.643 [2024-05-16 20:18:57.685293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.643 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.643 [2024-05-16 20:18:57.750119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:10.901 [2024-05-16 20:18:57.864603] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.901 [2024-05-16 20:18:57.864671] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:10.901 [2024-05-16 20:18:57.864685] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.901 [2024-05-16 20:18:57.864695] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.901 [2024-05-16 20:18:57.864704] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:10.901 [2024-05-16 20:18:57.864803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.901 [2024-05-16 20:18:57.864921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.901 [2024-05-16 20:18:57.864949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:10.901 [2024-05-16 20:18:57.864952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.901 20:18:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:10.901 20:18:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:20:10.901 20:18:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:10.901 20:18:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:10.901 20:18:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:10.901 20:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.901 20:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:10.901 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.901 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:10.901 [2024-05-16 20:18:58.018696] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.901 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.901 20:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:10.901 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.901 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.158 Malloc0 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.159 [2024-05-16 20:18:58.070069] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:11.159 [2024-05-16 20:18:58.070362] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.159 [ 00:20:11.159 { 00:20:11.159 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:11.159 "subtype": "Discovery", 00:20:11.159 "listen_addresses": [], 00:20:11.159 "allow_any_host": true, 00:20:11.159 "hosts": [] 00:20:11.159 }, 00:20:11.159 { 00:20:11.159 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.159 "subtype": "NVMe", 00:20:11.159 "listen_addresses": [ 00:20:11.159 { 00:20:11.159 "trtype": "TCP", 00:20:11.159 "adrfam": "IPv4", 00:20:11.159 "traddr": "10.0.0.2", 00:20:11.159 "trsvcid": "4420" 00:20:11.159 } 00:20:11.159 ], 00:20:11.159 "allow_any_host": true, 00:20:11.159 "hosts": [], 00:20:11.159 "serial_number": "SPDK00000000000001", 00:20:11.159 "model_number": "SPDK bdev Controller", 00:20:11.159 "max_namespaces": 2, 00:20:11.159 "min_cntlid": 1, 00:20:11.159 "max_cntlid": 65519, 00:20:11.159 "namespaces": [ 00:20:11.159 { 00:20:11.159 "nsid": 1, 00:20:11.159 "bdev_name": "Malloc0", 00:20:11.159 "name": "Malloc0", 00:20:11.159 "nguid": "AB640532F6334F9595A7470758D5A3B0", 00:20:11.159 "uuid": "ab640532-f633-4f95-95a7-470758d5a3b0" 00:20:11.159 } 00:20:11.159 ] 00:20:11.159 } 00:20:11.159 ] 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=264676 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:20:11.159 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=3 00:20:11.159 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.417 Malloc1 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.417 Asynchronous Event Request test 00:20:11.417 Attaching to 10.0.0.2 00:20:11.417 Attached to 10.0.0.2 00:20:11.417 Registering asynchronous event callbacks... 00:20:11.417 Starting namespace attribute notice tests for all controllers... 00:20:11.417 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:11.417 aer_cb - Changed Namespace 00:20:11.417 Cleaning up... 
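Editorial aside: the aer.sh flow traced above reduces to creating a TCP transport, publishing a malloc bdev through nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, starting the test/nvme/aer tool so it arms its event callbacks, and then hot-adding a second namespace so the target emits the namespace-attribute-changed AEN ("aer_cb - Changed Namespace" above). The condensed sketch below is hedged, not the harness itself; it assumes the nvmf_tgt launched earlier in the log is still answering RPCs on the default /var/tmp/spdk.sock, and uses only commands that appear verbatim in the trace.

```bash
#!/usr/bin/env bash
# Hypothetical condensed replay of host/aer.sh as traced above.
# Assumes: $SPDK_DIR is an SPDK checkout and the nvmf_tgt started earlier
# in the log is still answering RPCs on the default /var/tmp/spdk.sock.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
RPC="$SPDK_DIR/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1
TOUCH=/tmp/aer_touch_file

# Target-side setup, mirroring aer.sh@14-19.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 --name Malloc0
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 2
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Start the AER listener; it creates $TOUCH once its callbacks are armed,
# which is what the waitforfile polling loop in the log is waiting for.
rm -f "$TOUCH"
"$SPDK_DIR/test/nvme/aer/aer" \
    -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:'"$NQN" \
    -n 2 -t "$TOUCH" &
AER_PID=$!
while [ ! -e "$TOUCH" ]; do sleep 0.1; done

# Hot-add a second namespace; the target then sends the namespace-attribute
# AEN that produces the "aer_cb - Changed Namespace" line, and the tool exits.
$RPC bdev_malloc_create 64 4096 --name Malloc1
$RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 2
wait "$AER_PID"
```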
00:20:11.417 [ 00:20:11.417 { 00:20:11.417 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:11.417 "subtype": "Discovery", 00:20:11.417 "listen_addresses": [], 00:20:11.417 "allow_any_host": true, 00:20:11.417 "hosts": [] 00:20:11.417 }, 00:20:11.417 { 00:20:11.417 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.417 "subtype": "NVMe", 00:20:11.417 "listen_addresses": [ 00:20:11.417 { 00:20:11.417 "trtype": "TCP", 00:20:11.417 "adrfam": "IPv4", 00:20:11.417 "traddr": "10.0.0.2", 00:20:11.417 "trsvcid": "4420" 00:20:11.417 } 00:20:11.417 ], 00:20:11.417 "allow_any_host": true, 00:20:11.417 "hosts": [], 00:20:11.417 "serial_number": "SPDK00000000000001", 00:20:11.417 "model_number": "SPDK bdev Controller", 00:20:11.417 "max_namespaces": 2, 00:20:11.417 "min_cntlid": 1, 00:20:11.417 "max_cntlid": 65519, 00:20:11.417 "namespaces": [ 00:20:11.417 { 00:20:11.417 "nsid": 1, 00:20:11.417 "bdev_name": "Malloc0", 00:20:11.417 "name": "Malloc0", 00:20:11.417 "nguid": "AB640532F6334F9595A7470758D5A3B0", 00:20:11.417 "uuid": "ab640532-f633-4f95-95a7-470758d5a3b0" 00:20:11.417 }, 00:20:11.417 { 00:20:11.417 "nsid": 2, 00:20:11.417 "bdev_name": "Malloc1", 00:20:11.417 "name": "Malloc1", 00:20:11.417 "nguid": "D7C8FC6BF65F4203A596C95C0DBDAB42", 00:20:11.417 "uuid": "d7c8fc6b-f65f-4203-a596-c95c0dbdab42" 00:20:11.417 } 00:20:11.417 ] 00:20:11.417 } 00:20:11.417 ] 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 264676 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.417 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.418 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.418 20:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:11.418 20:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:11.418 20:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:11.418 20:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:11.418 20:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:11.418 20:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:11.418 20:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:11.418 20:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:11.418 rmmod nvme_tcp 00:20:11.676 rmmod nvme_fabrics 00:20:11.676 rmmod nvme_keyring 00:20:11.676 20:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:11.676 20:18:58 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:20:11.676 20:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:11.676 20:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 264574 ']' 00:20:11.676 20:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 264574 00:20:11.676 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 264574 ']' 00:20:11.676 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 264574 00:20:11.676 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:20:11.676 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:11.676 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 264574 00:20:11.676 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:11.676 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:11.676 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 264574' 00:20:11.676 killing process with pid 264574 00:20:11.676 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 264574 00:20:11.676 [2024-05-16 20:18:58.639063] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:11.676 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 264574 00:20:11.934 20:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:11.934 20:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:11.934 20:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:11.934 20:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:11.934 20:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:11.934 20:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.934 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.934 20:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.837 20:19:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:13.837 00:20:13.837 real 0m5.437s 00:20:13.837 user 0m4.740s 00:20:13.837 sys 0m1.805s 00:20:13.837 20:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:13.837 20:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:13.837 ************************************ 00:20:13.837 END TEST nvmf_aer 00:20:13.837 ************************************ 00:20:14.095 20:19:00 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:14.095 20:19:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:14.095 20:19:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:14.095 20:19:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:14.095 ************************************ 00:20:14.095 START TEST nvmf_async_init 00:20:14.095 ************************************ 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:14.095 * 
Looking for test storage... 00:20:14.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.095 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=c7291b168c6c4a218123e6bbfba31476 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:14.096 20:19:01 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:14.096 20:19:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:16.011 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:16.011 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:16.011 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:16.011 Found net devices under 0000:09:00.0: cvl_0_0 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.011 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:16.012 Found net devices under 0000:09:00.1: cvl_0_1 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:16.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:16.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:20:16.012 00:20:16.012 --- 10.0.0.2 ping statistics --- 00:20:16.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.012 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:16.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:16.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:20:16.012 00:20:16.012 --- 10.0.0.1 ping statistics --- 00:20:16.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.012 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:16.012 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:16.310 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:16.310 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:16.310 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:16.310 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:16.310 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=266654 00:20:16.310 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:16.310 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 266654 00:20:16.310 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 266654 ']' 00:20:16.310 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.310 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:16.310 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.311 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:16.311 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:16.311 [2024-05-16 20:19:03.211420] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
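For reference, the interface plumbing that the nvmf/common.sh trace above walks through before each TCP run condenses to the shell sequence below. This is a sketch reconstructed only from this log: cvl_0_0 and cvl_0_1 are the two E810 ports bound to the ice driver on this host, and 10.0.0.1/10.0.0.2 are the addresses this particular run uses.

# Move the target-side port into its own network namespace so that
# initiator and target traffic crosses the physical link instead of
# short-circuiting inside the host stack.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# The initiator keeps 10.0.0.1 on the host side; the target gets
# 10.0.0.2 inside the namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port on the initiator side and verify reachability
# in both directions before the target application is started.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1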
00:20:16.311 [2024-05-16 20:19:03.211502] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.311 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.311 [2024-05-16 20:19:03.280548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.311 [2024-05-16 20:19:03.398226] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.311 [2024-05-16 20:19:03.398281] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.311 [2024-05-16 20:19:03.398306] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.311 [2024-05-16 20:19:03.398319] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.311 [2024-05-16 20:19:03.398331] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:16.311 [2024-05-16 20:19:03.398370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.298 [2024-05-16 20:19:04.224165] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.298 null0 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g c7291b168c6c4a218123e6bbfba31476 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.298 [2024-05-16 20:19:04.264176] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:17.298 [2024-05-16 20:19:04.264453] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.298 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.556 nvme0n1 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.556 [ 00:20:17.556 { 00:20:17.556 "name": "nvme0n1", 00:20:17.556 "aliases": [ 00:20:17.556 "c7291b16-8c6c-4a21-8123-e6bbfba31476" 00:20:17.556 ], 00:20:17.556 "product_name": "NVMe disk", 00:20:17.556 "block_size": 512, 00:20:17.556 "num_blocks": 2097152, 00:20:17.556 "uuid": "c7291b16-8c6c-4a21-8123-e6bbfba31476", 00:20:17.556 "assigned_rate_limits": { 00:20:17.556 "rw_ios_per_sec": 0, 00:20:17.556 "rw_mbytes_per_sec": 0, 00:20:17.556 "r_mbytes_per_sec": 0, 00:20:17.556 "w_mbytes_per_sec": 0 00:20:17.556 }, 00:20:17.556 "claimed": false, 00:20:17.556 "zoned": false, 00:20:17.556 "supported_io_types": { 00:20:17.556 "read": true, 00:20:17.556 "write": true, 00:20:17.556 "unmap": false, 00:20:17.556 "write_zeroes": true, 00:20:17.556 "flush": true, 00:20:17.556 "reset": true, 00:20:17.556 "compare": true, 00:20:17.556 "compare_and_write": true, 00:20:17.556 "abort": true, 00:20:17.556 "nvme_admin": true, 00:20:17.556 "nvme_io": true 00:20:17.556 }, 00:20:17.556 "memory_domains": [ 00:20:17.556 { 00:20:17.556 "dma_device_id": "system", 00:20:17.556 "dma_device_type": 1 00:20:17.556 } 00:20:17.556 ], 00:20:17.556 "driver_specific": { 00:20:17.556 "nvme": [ 00:20:17.556 { 00:20:17.556 "trid": { 00:20:17.556 "trtype": "TCP", 00:20:17.556 "adrfam": "IPv4", 00:20:17.556 "traddr": "10.0.0.2", 00:20:17.556 "trsvcid": "4420", 00:20:17.556 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:17.556 }, 
00:20:17.556 "ctrlr_data": { 00:20:17.556 "cntlid": 1, 00:20:17.556 "vendor_id": "0x8086", 00:20:17.556 "model_number": "SPDK bdev Controller", 00:20:17.556 "serial_number": "00000000000000000000", 00:20:17.556 "firmware_revision": "24.09", 00:20:17.556 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:17.556 "oacs": { 00:20:17.556 "security": 0, 00:20:17.556 "format": 0, 00:20:17.556 "firmware": 0, 00:20:17.556 "ns_manage": 0 00:20:17.556 }, 00:20:17.556 "multi_ctrlr": true, 00:20:17.556 "ana_reporting": false 00:20:17.556 }, 00:20:17.556 "vs": { 00:20:17.556 "nvme_version": "1.3" 00:20:17.556 }, 00:20:17.556 "ns_data": { 00:20:17.556 "id": 1, 00:20:17.556 "can_share": true 00:20:17.556 } 00:20:17.556 } 00:20:17.556 ], 00:20:17.556 "mp_policy": "active_passive" 00:20:17.556 } 00:20:17.556 } 00:20:17.556 ] 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.556 [2024-05-16 20:19:04.516977] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:17.556 [2024-05-16 20:19:04.517051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223fcc0 (9): Bad file descriptor 00:20:17.556 [2024-05-16 20:19:04.659013] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.556 [ 00:20:17.556 { 00:20:17.556 "name": "nvme0n1", 00:20:17.556 "aliases": [ 00:20:17.556 "c7291b16-8c6c-4a21-8123-e6bbfba31476" 00:20:17.556 ], 00:20:17.556 "product_name": "NVMe disk", 00:20:17.556 "block_size": 512, 00:20:17.556 "num_blocks": 2097152, 00:20:17.556 "uuid": "c7291b16-8c6c-4a21-8123-e6bbfba31476", 00:20:17.556 "assigned_rate_limits": { 00:20:17.556 "rw_ios_per_sec": 0, 00:20:17.556 "rw_mbytes_per_sec": 0, 00:20:17.556 "r_mbytes_per_sec": 0, 00:20:17.556 "w_mbytes_per_sec": 0 00:20:17.556 }, 00:20:17.556 "claimed": false, 00:20:17.556 "zoned": false, 00:20:17.556 "supported_io_types": { 00:20:17.556 "read": true, 00:20:17.556 "write": true, 00:20:17.556 "unmap": false, 00:20:17.556 "write_zeroes": true, 00:20:17.556 "flush": true, 00:20:17.556 "reset": true, 00:20:17.556 "compare": true, 00:20:17.556 "compare_and_write": true, 00:20:17.556 "abort": true, 00:20:17.556 "nvme_admin": true, 00:20:17.556 "nvme_io": true 00:20:17.556 }, 00:20:17.556 "memory_domains": [ 00:20:17.556 { 00:20:17.556 "dma_device_id": "system", 00:20:17.556 "dma_device_type": 1 00:20:17.556 } 00:20:17.556 ], 00:20:17.556 "driver_specific": { 00:20:17.556 "nvme": [ 00:20:17.556 { 00:20:17.556 "trid": { 00:20:17.556 "trtype": "TCP", 00:20:17.556 "adrfam": "IPv4", 00:20:17.556 "traddr": "10.0.0.2", 00:20:17.556 "trsvcid": "4420", 00:20:17.556 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:17.556 }, 00:20:17.556 "ctrlr_data": { 00:20:17.556 "cntlid": 2, 00:20:17.556 
"vendor_id": "0x8086", 00:20:17.556 "model_number": "SPDK bdev Controller", 00:20:17.556 "serial_number": "00000000000000000000", 00:20:17.556 "firmware_revision": "24.09", 00:20:17.556 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:17.556 "oacs": { 00:20:17.556 "security": 0, 00:20:17.556 "format": 0, 00:20:17.556 "firmware": 0, 00:20:17.556 "ns_manage": 0 00:20:17.556 }, 00:20:17.556 "multi_ctrlr": true, 00:20:17.556 "ana_reporting": false 00:20:17.556 }, 00:20:17.556 "vs": { 00:20:17.556 "nvme_version": "1.3" 00:20:17.556 }, 00:20:17.556 "ns_data": { 00:20:17.556 "id": 1, 00:20:17.556 "can_share": true 00:20:17.556 } 00:20:17.556 } 00:20:17.556 ], 00:20:17.556 "mp_policy": "active_passive" 00:20:17.556 } 00:20:17.556 } 00:20:17.556 ] 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.8Ytd3rowoS 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.8Ytd3rowoS 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.556 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.814 [2024-05-16 20:19:04.713633] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:17.814 [2024-05-16 20:19:04.713769] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8Ytd3rowoS 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.814 [2024-05-16 20:19:04.721653] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.814 20:19:04 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8Ytd3rowoS 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.814 [2024-05-16 20:19:04.729665] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:17.814 [2024-05-16 20:19:04.729727] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:17.814 nvme0n1 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.814 [ 00:20:17.814 { 00:20:17.814 "name": "nvme0n1", 00:20:17.814 "aliases": [ 00:20:17.814 "c7291b16-8c6c-4a21-8123-e6bbfba31476" 00:20:17.814 ], 00:20:17.814 "product_name": "NVMe disk", 00:20:17.814 "block_size": 512, 00:20:17.814 "num_blocks": 2097152, 00:20:17.814 "uuid": "c7291b16-8c6c-4a21-8123-e6bbfba31476", 00:20:17.814 "assigned_rate_limits": { 00:20:17.814 "rw_ios_per_sec": 0, 00:20:17.814 "rw_mbytes_per_sec": 0, 00:20:17.814 "r_mbytes_per_sec": 0, 00:20:17.814 "w_mbytes_per_sec": 0 00:20:17.814 }, 00:20:17.814 "claimed": false, 00:20:17.814 "zoned": false, 00:20:17.814 "supported_io_types": { 00:20:17.814 "read": true, 00:20:17.814 "write": true, 00:20:17.814 "unmap": false, 00:20:17.814 "write_zeroes": true, 00:20:17.814 "flush": true, 00:20:17.814 "reset": true, 00:20:17.814 "compare": true, 00:20:17.814 "compare_and_write": true, 00:20:17.814 "abort": true, 00:20:17.814 "nvme_admin": true, 00:20:17.814 "nvme_io": true 00:20:17.814 }, 00:20:17.814 "memory_domains": [ 00:20:17.814 { 00:20:17.814 "dma_device_id": "system", 00:20:17.814 "dma_device_type": 1 00:20:17.814 } 00:20:17.814 ], 00:20:17.814 "driver_specific": { 00:20:17.814 "nvme": [ 00:20:17.814 { 00:20:17.814 "trid": { 00:20:17.814 "trtype": "TCP", 00:20:17.814 "adrfam": "IPv4", 00:20:17.814 "traddr": "10.0.0.2", 00:20:17.814 "trsvcid": "4421", 00:20:17.814 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:17.814 }, 00:20:17.814 "ctrlr_data": { 00:20:17.814 "cntlid": 3, 00:20:17.814 "vendor_id": "0x8086", 00:20:17.814 "model_number": "SPDK bdev Controller", 00:20:17.814 "serial_number": "00000000000000000000", 00:20:17.814 "firmware_revision": "24.09", 00:20:17.814 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:17.814 "oacs": { 00:20:17.814 "security": 0, 00:20:17.814 "format": 0, 00:20:17.814 "firmware": 0, 00:20:17.814 "ns_manage": 0 00:20:17.814 }, 00:20:17.814 "multi_ctrlr": true, 00:20:17.814 "ana_reporting": false 00:20:17.814 }, 00:20:17.814 "vs": { 00:20:17.814 "nvme_version": "1.3" 00:20:17.814 }, 00:20:17.814 "ns_data": { 00:20:17.814 "id": 1, 00:20:17.814 "can_share": true 00:20:17.814 } 00:20:17.814 } 00:20:17.814 ], 00:20:17.814 "mp_policy": "active_passive" 00:20:17.814 } 00:20:17.814 } 00:20:17.814 ] 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.8Ytd3rowoS 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:17.814 20:19:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:17.814 rmmod nvme_tcp 00:20:17.814 rmmod nvme_fabrics 00:20:17.814 rmmod nvme_keyring 00:20:17.815 20:19:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:17.815 20:19:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:17.815 20:19:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:17.815 20:19:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 266654 ']' 00:20:17.815 20:19:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 266654 00:20:17.815 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 266654 ']' 00:20:17.815 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 266654 00:20:17.815 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:20:17.815 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:17.815 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 266654 00:20:17.815 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:17.815 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:17.815 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 266654' 00:20:17.815 killing process with pid 266654 00:20:17.815 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 266654 00:20:17.815 [2024-05-16 20:19:04.914960] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:17.815 [2024-05-16 20:19:04.914994] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:17.815 [2024-05-16 20:19:04.915010] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:17.815 20:19:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 266654 00:20:18.073 20:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:18.073 20:19:05 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:18.073 20:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:18.073 20:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:18.073 20:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:18.073 20:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.073 20:19:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.073 20:19:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.616 20:19:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:20.616 00:20:20.616 real 0m6.189s 00:20:20.616 user 0m3.062s 00:20:20.616 sys 0m1.768s 00:20:20.616 20:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:20.616 20:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:20.616 ************************************ 00:20:20.616 END TEST nvmf_async_init 00:20:20.616 ************************************ 00:20:20.616 20:19:07 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:20.616 20:19:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:20.616 20:19:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:20.616 20:19:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:20.616 ************************************ 00:20:20.616 START TEST dma 00:20:20.616 ************************************ 00:20:20.616 20:19:07 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:20.616 * Looking for test storage... 
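Stripped of the xtrace prefixes, the nvmf_async_init case that just completed reduces to the RPC sequence below. rpc_cmd is the autotest wrapper around scripts/rpc.py talking to the target inside the cvl_0_0_ns_spdk namespace; the namespace GUID and the $key_path PSK file are the values created earlier in this run.

# Target side: a 1024 MiB null bdev exported as namespace 1 of cnode0,
# listening on the plain TCP port 4420.
rpc_cmd nvmf_create_transport -t tcp -o
rpc_cmd bdev_null_create null0 1024 512
rpc_cmd bdev_wait_for_examine
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g c7291b168c6c4a218123e6bbfba31476
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Host side: attach, inspect, reset and detach the controller over TCP.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
rpc_cmd bdev_get_bdevs -b nvme0n1
rpc_cmd bdev_nvme_reset_controller nvme0
rpc_cmd bdev_nvme_detach_controller nvme0

# Repeat the attach against a PSK-protected listener on port 4421;
# $key_path is the chmod 0600 mktemp file holding the test TLS key.
rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"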
00:20:20.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:20.616 20:19:07 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:20.616 20:19:07 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:20.616 20:19:07 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:20.616 20:19:07 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:20.616 20:19:07 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.616 20:19:07 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.616 20:19:07 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.616 20:19:07 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:20:20.616 20:19:07 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:20.616 20:19:07 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:20.616 20:19:07 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:20.616 20:19:07 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:20:20.616 00:20:20.616 real 0m0.066s 00:20:20.616 user 0m0.035s 00:20:20.616 sys 0m0.037s 00:20:20.616 20:19:07 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:20.616 20:19:07 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:20:20.617 ************************************ 00:20:20.617 END TEST dma 00:20:20.617 ************************************ 00:20:20.617 20:19:07 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:20.617 20:19:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:20.617 20:19:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:20.617 20:19:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:20.617 ************************************ 00:20:20.617 START TEST nvmf_identify 00:20:20.617 ************************************ 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:20.617 * Looking for test storage... 
00:20:20.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:20.617 20:19:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:22.514 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:22.514 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:22.514 Found net devices under 0000:09:00.0: cvl_0_0 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:22.514 Found net devices under 0000:09:00.1: cvl_0_1 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:22.514 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:22.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:22.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:20:22.515 00:20:22.515 --- 10.0.0.2 ping statistics --- 00:20:22.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.515 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:22.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:22.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:20:22.515 00:20:22.515 --- 10.0.0.1 ping statistics --- 00:20:22.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.515 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=268897 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 268897 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 268897 ']' 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:22.515 20:19:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.515 [2024-05-16 20:19:09.585984] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:20:22.515 [2024-05-16 20:19:09.586055] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.515 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.515 [2024-05-16 20:19:09.654674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:22.773 [2024-05-16 20:19:09.773773] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.773 [2024-05-16 20:19:09.773825] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:22.773 [2024-05-16 20:19:09.773851] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.773 [2024-05-16 20:19:09.773874] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.773 [2024-05-16 20:19:09.773885] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.773 [2024-05-16 20:19:09.773980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.773 [2024-05-16 20:19:09.774034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.773 [2024-05-16 20:19:09.774071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:22.773 [2024-05-16 20:19:09.774074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.726 [2024-05-16 20:19:10.562814] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.726 Malloc0 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.726 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.726 [2024-05-16 20:19:10.634220] 
nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:23.726 [2024-05-16 20:19:10.634525] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.727 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.727 20:19:10 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:23.727 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.727 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.727 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.727 20:19:10 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:23.727 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.727 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.727 [ 00:20:23.727 { 00:20:23.727 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:23.727 "subtype": "Discovery", 00:20:23.727 "listen_addresses": [ 00:20:23.727 { 00:20:23.727 "trtype": "TCP", 00:20:23.727 "adrfam": "IPv4", 00:20:23.727 "traddr": "10.0.0.2", 00:20:23.727 "trsvcid": "4420" 00:20:23.727 } 00:20:23.727 ], 00:20:23.727 "allow_any_host": true, 00:20:23.727 "hosts": [] 00:20:23.727 }, 00:20:23.727 { 00:20:23.727 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.727 "subtype": "NVMe", 00:20:23.727 "listen_addresses": [ 00:20:23.727 { 00:20:23.727 "trtype": "TCP", 00:20:23.727 "adrfam": "IPv4", 00:20:23.727 "traddr": "10.0.0.2", 00:20:23.727 "trsvcid": "4420" 00:20:23.727 } 00:20:23.727 ], 00:20:23.727 "allow_any_host": true, 00:20:23.727 "hosts": [], 00:20:23.727 "serial_number": "SPDK00000000000001", 00:20:23.727 "model_number": "SPDK bdev Controller", 00:20:23.727 "max_namespaces": 32, 00:20:23.727 "min_cntlid": 1, 00:20:23.727 "max_cntlid": 65519, 00:20:23.727 "namespaces": [ 00:20:23.727 { 00:20:23.727 "nsid": 1, 00:20:23.727 "bdev_name": "Malloc0", 00:20:23.727 "name": "Malloc0", 00:20:23.727 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:23.727 "eui64": "ABCDEF0123456789", 00:20:23.727 "uuid": "4f27d425-5ff5-405b-9600-1b4475780a0b" 00:20:23.727 } 00:20:23.727 ] 00:20:23.727 } 00:20:23.727 ] 00:20:23.727 20:19:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.727 20:19:10 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:23.727 [2024-05-16 20:19:10.673742] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
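Before the identify pass that starts here, host/identify.sh has configured the just-launched target through a short RPC sequence; rpc_cmd is the autotest helper that effectively forwards each call to scripts/rpc.py on the target's RPC socket. Condensed from the rpc_cmd traces above:

  # Create the TCP transport (flags exactly as traced: -t tcp -o -u 8192)
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192

  # 64 MiB malloc bdev with 512-byte blocks to back the namespace
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0

  # NVM subsystem that accepts any host (-a), with a fixed serial number (-s)
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

  # Attach the bdev as namespace 1 with fixed NGUID / EUI-64 identifiers
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789

  # Listen on the namespaced target address, for the NVM subsystem and for discovery
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Dump the resulting configuration (the JSON listing shown above)
  rpc_cmd nvmf_get_subsystems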
00:20:23.727 [2024-05-16 20:19:10.673779] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid269051 ] 00:20:23.727 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.727 [2024-05-16 20:19:10.707250] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:23.727 [2024-05-16 20:19:10.707311] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:23.727 [2024-05-16 20:19:10.707321] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:23.727 [2024-05-16 20:19:10.707336] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:23.727 [2024-05-16 20:19:10.707349] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:23.727 [2024-05-16 20:19:10.710902] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:23.727 [2024-05-16 20:19:10.710973] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1da6540 0 00:20:23.727 [2024-05-16 20:19:10.717864] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:23.727 [2024-05-16 20:19:10.717903] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:23.727 [2024-05-16 20:19:10.717913] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:23.727 [2024-05-16 20:19:10.717920] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:23.727 [2024-05-16 20:19:10.717986] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.727 [2024-05-16 20:19:10.717999] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.727 [2024-05-16 20:19:10.718008] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1da6540) 00:20:23.727 [2024-05-16 20:19:10.718026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:23.727 [2024-05-16 20:19:10.718053] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e063a0, cid 0, qid 0 00:20:23.727 [2024-05-16 20:19:10.725881] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.727 [2024-05-16 20:19:10.725899] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.727 [2024-05-16 20:19:10.725907] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.727 [2024-05-16 20:19:10.725914] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e063a0) on tqpair=0x1da6540 00:20:23.727 [2024-05-16 20:19:10.725936] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:23.727 [2024-05-16 20:19:10.725964] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:23.727 [2024-05-16 20:19:10.725974] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:23.727 [2024-05-16 20:19:10.725996] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.727 [2024-05-16 20:19:10.726005] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:20:23.727 [2024-05-16 20:19:10.726011] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1da6540) 00:20:23.727 [2024-05-16 20:19:10.726023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.727 [2024-05-16 20:19:10.726046] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e063a0, cid 0, qid 0 00:20:23.727 [2024-05-16 20:19:10.726153] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.727 [2024-05-16 20:19:10.726168] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.727 [2024-05-16 20:19:10.726175] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.727 [2024-05-16 20:19:10.726182] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e063a0) on tqpair=0x1da6540 00:20:23.727 [2024-05-16 20:19:10.726193] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:23.727 [2024-05-16 20:19:10.726206] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:23.727 [2024-05-16 20:19:10.726218] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.727 [2024-05-16 20:19:10.726226] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.727 [2024-05-16 20:19:10.726232] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1da6540) 00:20:23.727 [2024-05-16 20:19:10.726243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.727 [2024-05-16 20:19:10.726263] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e063a0, cid 0, qid 0 00:20:23.727 [2024-05-16 20:19:10.726344] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.727 [2024-05-16 20:19:10.726356] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.727 [2024-05-16 20:19:10.726367] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.727 [2024-05-16 20:19:10.726374] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e063a0) on tqpair=0x1da6540 00:20:23.727 [2024-05-16 20:19:10.726385] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:23.727 [2024-05-16 20:19:10.726399] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:23.727 [2024-05-16 20:19:10.726412] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.727 [2024-05-16 20:19:10.726419] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.727 [2024-05-16 20:19:10.726426] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1da6540) 00:20:23.727 [2024-05-16 20:19:10.726436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.727 [2024-05-16 20:19:10.726457] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e063a0, cid 0, qid 0 00:20:23.727 [2024-05-16 20:19:10.726526] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.727 [2024-05-16 
20:19:10.726539] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.727 [2024-05-16 20:19:10.726546] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.727 [2024-05-16 20:19:10.726553] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e063a0) on tqpair=0x1da6540 00:20:23.727 [2024-05-16 20:19:10.726563] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:23.727 [2024-05-16 20:19:10.726579] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.727 [2024-05-16 20:19:10.726589] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.727 [2024-05-16 20:19:10.726595] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1da6540) 00:20:23.727 [2024-05-16 20:19:10.726606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.727 [2024-05-16 20:19:10.726625] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e063a0, cid 0, qid 0 00:20:23.727 [2024-05-16 20:19:10.726692] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.727 [2024-05-16 20:19:10.726705] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.727 [2024-05-16 20:19:10.726712] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.727 [2024-05-16 20:19:10.726718] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e063a0) on tqpair=0x1da6540 00:20:23.727 [2024-05-16 20:19:10.726728] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:23.727 [2024-05-16 20:19:10.726737] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:23.727 [2024-05-16 20:19:10.726750] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:23.727 [2024-05-16 20:19:10.726861] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:23.727 [2024-05-16 20:19:10.726872] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:23.727 [2024-05-16 20:19:10.726887] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.727 [2024-05-16 20:19:10.726894] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.727 [2024-05-16 20:19:10.726901] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1da6540) 00:20:23.727 [2024-05-16 20:19:10.726911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.727 [2024-05-16 20:19:10.726937] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e063a0, cid 0, qid 0 00:20:23.728 [2024-05-16 20:19:10.727021] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.728 [2024-05-16 20:19:10.727034] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.728 [2024-05-16 20:19:10.727041] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.727047] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e063a0) on tqpair=0x1da6540 00:20:23.728 [2024-05-16 20:19:10.727057] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:23.728 [2024-05-16 20:19:10.727073] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.727082] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.727089] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1da6540) 00:20:23.728 [2024-05-16 20:19:10.727099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.728 [2024-05-16 20:19:10.727119] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e063a0, cid 0, qid 0 00:20:23.728 [2024-05-16 20:19:10.727196] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.728 [2024-05-16 20:19:10.727209] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.728 [2024-05-16 20:19:10.727216] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.727223] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e063a0) on tqpair=0x1da6540 00:20:23.728 [2024-05-16 20:19:10.727233] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:23.728 [2024-05-16 20:19:10.727241] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:23.728 [2024-05-16 20:19:10.727255] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:23.728 [2024-05-16 20:19:10.727270] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:23.728 [2024-05-16 20:19:10.727285] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.727293] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1da6540) 00:20:23.728 [2024-05-16 20:19:10.727304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.728 [2024-05-16 20:19:10.727325] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e063a0, cid 0, qid 0 00:20:23.728 [2024-05-16 20:19:10.727439] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.728 [2024-05-16 20:19:10.727451] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.728 [2024-05-16 20:19:10.727459] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.727466] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1da6540): datao=0, datal=4096, cccid=0 00:20:23.728 [2024-05-16 20:19:10.727474] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e063a0) on tqpair(0x1da6540): expected_datao=0, payload_size=4096 00:20:23.728 [2024-05-16 20:19:10.727482] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.727500] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.727511] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.767964] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.728 [2024-05-16 20:19:10.767983] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.728 [2024-05-16 20:19:10.767991] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.768003] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e063a0) on tqpair=0x1da6540 00:20:23.728 [2024-05-16 20:19:10.768017] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:23.728 [2024-05-16 20:19:10.768027] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:23.728 [2024-05-16 20:19:10.768034] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:23.728 [2024-05-16 20:19:10.768048] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:23.728 [2024-05-16 20:19:10.768058] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:23.728 [2024-05-16 20:19:10.768066] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:23.728 [2024-05-16 20:19:10.768082] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:23.728 [2024-05-16 20:19:10.768095] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.768103] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.768109] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1da6540) 00:20:23.728 [2024-05-16 20:19:10.768121] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.728 [2024-05-16 20:19:10.768143] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e063a0, cid 0, qid 0 00:20:23.728 [2024-05-16 20:19:10.768231] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.728 [2024-05-16 20:19:10.768245] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.728 [2024-05-16 20:19:10.768252] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.768258] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e063a0) on tqpair=0x1da6540 00:20:23.728 [2024-05-16 20:19:10.768272] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.768280] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.768286] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1da6540) 00:20:23.728 [2024-05-16 20:19:10.768297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:20:23.728 [2024-05-16 20:19:10.768307] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.768314] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.768320] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1da6540) 00:20:23.728 [2024-05-16 20:19:10.768329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.728 [2024-05-16 20:19:10.768338] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.768345] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.768352] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1da6540) 00:20:23.728 [2024-05-16 20:19:10.768360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.728 [2024-05-16 20:19:10.768370] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.768377] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.768383] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da6540) 00:20:23.728 [2024-05-16 20:19:10.768391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.728 [2024-05-16 20:19:10.768404] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:23.728 [2024-05-16 20:19:10.768424] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:23.728 [2024-05-16 20:19:10.768437] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.768444] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1da6540) 00:20:23.728 [2024-05-16 20:19:10.768455] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.728 [2024-05-16 20:19:10.768477] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e063a0, cid 0, qid 0 00:20:23.728 [2024-05-16 20:19:10.768504] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06500, cid 1, qid 0 00:20:23.728 [2024-05-16 20:19:10.768511] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06660, cid 2, qid 0 00:20:23.728 [2024-05-16 20:19:10.768519] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e067c0, cid 3, qid 0 00:20:23.728 [2024-05-16 20:19:10.768526] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06920, cid 4, qid 0 00:20:23.728 [2024-05-16 20:19:10.768719] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.728 [2024-05-16 20:19:10.768734] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.728 [2024-05-16 20:19:10.768741] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.768747] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06920) on tqpair=0x1da6540 
00:20:23.728 [2024-05-16 20:19:10.768758] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:23.728 [2024-05-16 20:19:10.768767] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:23.728 [2024-05-16 20:19:10.768786] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.768795] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1da6540) 00:20:23.728 [2024-05-16 20:19:10.768806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.728 [2024-05-16 20:19:10.768827] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06920, cid 4, qid 0 00:20:23.728 [2024-05-16 20:19:10.768937] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.728 [2024-05-16 20:19:10.768952] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.728 [2024-05-16 20:19:10.768959] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.768965] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1da6540): datao=0, datal=4096, cccid=4 00:20:23.728 [2024-05-16 20:19:10.768973] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e06920) on tqpair(0x1da6540): expected_datao=0, payload_size=4096 00:20:23.728 [2024-05-16 20:19:10.768981] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.768991] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.768999] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.769011] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.728 [2024-05-16 20:19:10.769021] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.728 [2024-05-16 20:19:10.769028] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.769034] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06920) on tqpair=0x1da6540 00:20:23.728 [2024-05-16 20:19:10.769056] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:23.728 [2024-05-16 20:19:10.769099] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.728 [2024-05-16 20:19:10.769111] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1da6540) 00:20:23.728 [2024-05-16 20:19:10.769122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.729 [2024-05-16 20:19:10.769133] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.729 [2024-05-16 20:19:10.769140] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.729 [2024-05-16 20:19:10.769146] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1da6540) 00:20:23.729 [2024-05-16 20:19:10.769155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.729 [2024-05-16 20:19:10.769182] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06920, cid 4, qid 0 00:20:23.729 [2024-05-16 20:19:10.769194] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06a80, cid 5, qid 0 00:20:23.729 [2024-05-16 20:19:10.769312] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.729 [2024-05-16 20:19:10.769324] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.729 [2024-05-16 20:19:10.769332] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.729 [2024-05-16 20:19:10.769338] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1da6540): datao=0, datal=1024, cccid=4 00:20:23.729 [2024-05-16 20:19:10.769346] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e06920) on tqpair(0x1da6540): expected_datao=0, payload_size=1024 00:20:23.729 [2024-05-16 20:19:10.769353] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.729 [2024-05-16 20:19:10.769363] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.729 [2024-05-16 20:19:10.769370] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.729 [2024-05-16 20:19:10.769379] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.729 [2024-05-16 20:19:10.769388] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.729 [2024-05-16 20:19:10.769395] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.729 [2024-05-16 20:19:10.769401] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06a80) on tqpair=0x1da6540 00:20:23.729 [2024-05-16 20:19:10.811880] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.729 [2024-05-16 20:19:10.811899] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.729 [2024-05-16 20:19:10.811907] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.729 [2024-05-16 20:19:10.811913] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06920) on tqpair=0x1da6540 00:20:23.729 [2024-05-16 20:19:10.811939] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.729 [2024-05-16 20:19:10.811950] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1da6540) 00:20:23.729 [2024-05-16 20:19:10.811961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.729 [2024-05-16 20:19:10.812006] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06920, cid 4, qid 0 00:20:23.729 [2024-05-16 20:19:10.812184] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.729 [2024-05-16 20:19:10.812197] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.729 [2024-05-16 20:19:10.812204] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.729 [2024-05-16 20:19:10.812211] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1da6540): datao=0, datal=3072, cccid=4 00:20:23.729 [2024-05-16 20:19:10.812218] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e06920) on tqpair(0x1da6540): expected_datao=0, payload_size=3072 00:20:23.729 [2024-05-16 20:19:10.812226] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.729 [2024-05-16 20:19:10.812246] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:20:23.729 [2024-05-16 20:19:10.812260] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.729 [2024-05-16 20:19:10.852924] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.729 [2024-05-16 20:19:10.852943] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.729 [2024-05-16 20:19:10.852951] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.729 [2024-05-16 20:19:10.852958] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06920) on tqpair=0x1da6540 00:20:23.729 [2024-05-16 20:19:10.852976] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.729 [2024-05-16 20:19:10.852985] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1da6540) 00:20:23.729 [2024-05-16 20:19:10.852996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.729 [2024-05-16 20:19:10.853025] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06920, cid 4, qid 0 00:20:23.729 [2024-05-16 20:19:10.853113] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.729 [2024-05-16 20:19:10.853127] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.729 [2024-05-16 20:19:10.853134] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.729 [2024-05-16 20:19:10.853141] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1da6540): datao=0, datal=8, cccid=4 00:20:23.729 [2024-05-16 20:19:10.853149] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e06920) on tqpair(0x1da6540): expected_datao=0, payload_size=8 00:20:23.729 [2024-05-16 20:19:10.853156] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.729 [2024-05-16 20:19:10.853166] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.729 [2024-05-16 20:19:10.853174] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.989 [2024-05-16 20:19:10.893934] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.989 [2024-05-16 20:19:10.893954] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.989 [2024-05-16 20:19:10.893962] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.989 [2024-05-16 20:19:10.893969] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06920) on tqpair=0x1da6540 00:20:23.989 ===================================================== 00:20:23.989 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:23.989 ===================================================== 00:20:23.989 Controller Capabilities/Features 00:20:23.989 ================================ 00:20:23.989 Vendor ID: 0000 00:20:23.989 Subsystem Vendor ID: 0000 00:20:23.989 Serial Number: .................... 00:20:23.989 Model Number: ........................................ 
00:20:23.989 Firmware Version: 24.09 00:20:23.989 Recommended Arb Burst: 0 00:20:23.989 IEEE OUI Identifier: 00 00 00 00:20:23.989 Multi-path I/O 00:20:23.989 May have multiple subsystem ports: No 00:20:23.989 May have multiple controllers: No 00:20:23.989 Associated with SR-IOV VF: No 00:20:23.989 Max Data Transfer Size: 131072 00:20:23.989 Max Number of Namespaces: 0 00:20:23.989 Max Number of I/O Queues: 1024 00:20:23.989 NVMe Specification Version (VS): 1.3 00:20:23.989 NVMe Specification Version (Identify): 1.3 00:20:23.989 Maximum Queue Entries: 128 00:20:23.989 Contiguous Queues Required: Yes 00:20:23.989 Arbitration Mechanisms Supported 00:20:23.989 Weighted Round Robin: Not Supported 00:20:23.989 Vendor Specific: Not Supported 00:20:23.989 Reset Timeout: 15000 ms 00:20:23.989 Doorbell Stride: 4 bytes 00:20:23.989 NVM Subsystem Reset: Not Supported 00:20:23.989 Command Sets Supported 00:20:23.989 NVM Command Set: Supported 00:20:23.989 Boot Partition: Not Supported 00:20:23.989 Memory Page Size Minimum: 4096 bytes 00:20:23.989 Memory Page Size Maximum: 4096 bytes 00:20:23.989 Persistent Memory Region: Not Supported 00:20:23.989 Optional Asynchronous Events Supported 00:20:23.989 Namespace Attribute Notices: Not Supported 00:20:23.989 Firmware Activation Notices: Not Supported 00:20:23.989 ANA Change Notices: Not Supported 00:20:23.989 PLE Aggregate Log Change Notices: Not Supported 00:20:23.989 LBA Status Info Alert Notices: Not Supported 00:20:23.989 EGE Aggregate Log Change Notices: Not Supported 00:20:23.989 Normal NVM Subsystem Shutdown event: Not Supported 00:20:23.989 Zone Descriptor Change Notices: Not Supported 00:20:23.989 Discovery Log Change Notices: Supported 00:20:23.989 Controller Attributes 00:20:23.989 128-bit Host Identifier: Not Supported 00:20:23.989 Non-Operational Permissive Mode: Not Supported 00:20:23.989 NVM Sets: Not Supported 00:20:23.989 Read Recovery Levels: Not Supported 00:20:23.989 Endurance Groups: Not Supported 00:20:23.989 Predictable Latency Mode: Not Supported 00:20:23.989 Traffic Based Keep ALive: Not Supported 00:20:23.989 Namespace Granularity: Not Supported 00:20:23.989 SQ Associations: Not Supported 00:20:23.989 UUID List: Not Supported 00:20:23.989 Multi-Domain Subsystem: Not Supported 00:20:23.989 Fixed Capacity Management: Not Supported 00:20:23.989 Variable Capacity Management: Not Supported 00:20:23.989 Delete Endurance Group: Not Supported 00:20:23.989 Delete NVM Set: Not Supported 00:20:23.989 Extended LBA Formats Supported: Not Supported 00:20:23.989 Flexible Data Placement Supported: Not Supported 00:20:23.989 00:20:23.989 Controller Memory Buffer Support 00:20:23.989 ================================ 00:20:23.989 Supported: No 00:20:23.989 00:20:23.989 Persistent Memory Region Support 00:20:23.989 ================================ 00:20:23.989 Supported: No 00:20:23.989 00:20:23.989 Admin Command Set Attributes 00:20:23.989 ============================ 00:20:23.989 Security Send/Receive: Not Supported 00:20:23.989 Format NVM: Not Supported 00:20:23.989 Firmware Activate/Download: Not Supported 00:20:23.989 Namespace Management: Not Supported 00:20:23.989 Device Self-Test: Not Supported 00:20:23.989 Directives: Not Supported 00:20:23.989 NVMe-MI: Not Supported 00:20:23.989 Virtualization Management: Not Supported 00:20:23.989 Doorbell Buffer Config: Not Supported 00:20:23.989 Get LBA Status Capability: Not Supported 00:20:23.989 Command & Feature Lockdown Capability: Not Supported 00:20:23.989 Abort Command Limit: 1 00:20:23.989 Async 
Event Request Limit: 4 00:20:23.989 Number of Firmware Slots: N/A 00:20:23.989 Firmware Slot 1 Read-Only: N/A 00:20:23.989 Firmware Activation Without Reset: N/A 00:20:23.989 Multiple Update Detection Support: N/A 00:20:23.989 Firmware Update Granularity: No Information Provided 00:20:23.989 Per-Namespace SMART Log: No 00:20:23.989 Asymmetric Namespace Access Log Page: Not Supported 00:20:23.989 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:23.990 Command Effects Log Page: Not Supported 00:20:23.990 Get Log Page Extended Data: Supported 00:20:23.990 Telemetry Log Pages: Not Supported 00:20:23.990 Persistent Event Log Pages: Not Supported 00:20:23.990 Supported Log Pages Log Page: May Support 00:20:23.990 Commands Supported & Effects Log Page: Not Supported 00:20:23.990 Feature Identifiers & Effects Log Page:May Support 00:20:23.990 NVMe-MI Commands & Effects Log Page: May Support 00:20:23.990 Data Area 4 for Telemetry Log: Not Supported 00:20:23.990 Error Log Page Entries Supported: 128 00:20:23.990 Keep Alive: Not Supported 00:20:23.990 00:20:23.990 NVM Command Set Attributes 00:20:23.990 ========================== 00:20:23.990 Submission Queue Entry Size 00:20:23.990 Max: 1 00:20:23.990 Min: 1 00:20:23.990 Completion Queue Entry Size 00:20:23.990 Max: 1 00:20:23.990 Min: 1 00:20:23.990 Number of Namespaces: 0 00:20:23.990 Compare Command: Not Supported 00:20:23.990 Write Uncorrectable Command: Not Supported 00:20:23.990 Dataset Management Command: Not Supported 00:20:23.990 Write Zeroes Command: Not Supported 00:20:23.990 Set Features Save Field: Not Supported 00:20:23.990 Reservations: Not Supported 00:20:23.990 Timestamp: Not Supported 00:20:23.990 Copy: Not Supported 00:20:23.990 Volatile Write Cache: Not Present 00:20:23.990 Atomic Write Unit (Normal): 1 00:20:23.990 Atomic Write Unit (PFail): 1 00:20:23.990 Atomic Compare & Write Unit: 1 00:20:23.990 Fused Compare & Write: Supported 00:20:23.990 Scatter-Gather List 00:20:23.990 SGL Command Set: Supported 00:20:23.990 SGL Keyed: Supported 00:20:23.990 SGL Bit Bucket Descriptor: Not Supported 00:20:23.990 SGL Metadata Pointer: Not Supported 00:20:23.990 Oversized SGL: Not Supported 00:20:23.990 SGL Metadata Address: Not Supported 00:20:23.990 SGL Offset: Supported 00:20:23.990 Transport SGL Data Block: Not Supported 00:20:23.990 Replay Protected Memory Block: Not Supported 00:20:23.990 00:20:23.990 Firmware Slot Information 00:20:23.990 ========================= 00:20:23.990 Active slot: 0 00:20:23.990 00:20:23.990 00:20:23.990 Error Log 00:20:23.990 ========= 00:20:23.990 00:20:23.990 Active Namespaces 00:20:23.990 ================= 00:20:23.990 Discovery Log Page 00:20:23.990 ================== 00:20:23.990 Generation Counter: 2 00:20:23.990 Number of Records: 2 00:20:23.990 Record Format: 0 00:20:23.990 00:20:23.990 Discovery Log Entry 0 00:20:23.990 ---------------------- 00:20:23.990 Transport Type: 3 (TCP) 00:20:23.990 Address Family: 1 (IPv4) 00:20:23.990 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:23.990 Entry Flags: 00:20:23.990 Duplicate Returned Information: 1 00:20:23.990 Explicit Persistent Connection Support for Discovery: 1 00:20:23.990 Transport Requirements: 00:20:23.990 Secure Channel: Not Required 00:20:23.990 Port ID: 0 (0x0000) 00:20:23.990 Controller ID: 65535 (0xffff) 00:20:23.990 Admin Max SQ Size: 128 00:20:23.990 Transport Service Identifier: 4420 00:20:23.990 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:23.990 Transport Address: 10.0.0.2 00:20:23.990 
Discovery Log Entry 1 00:20:23.990 ---------------------- 00:20:23.990 Transport Type: 3 (TCP) 00:20:23.990 Address Family: 1 (IPv4) 00:20:23.990 Subsystem Type: 2 (NVM Subsystem) 00:20:23.990 Entry Flags: 00:20:23.990 Duplicate Returned Information: 0 00:20:23.990 Explicit Persistent Connection Support for Discovery: 0 00:20:23.990 Transport Requirements: 00:20:23.990 Secure Channel: Not Required 00:20:23.990 Port ID: 0 (0x0000) 00:20:23.990 Controller ID: 65535 (0xffff) 00:20:23.990 Admin Max SQ Size: 128 00:20:23.990 Transport Service Identifier: 4420 00:20:23.990 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:23.990 Transport Address: 10.0.0.2 [2024-05-16 20:19:10.894080] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:23.990 [2024-05-16 20:19:10.894106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.990 [2024-05-16 20:19:10.894119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.990 [2024-05-16 20:19:10.894129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.990 [2024-05-16 20:19:10.894139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.990 [2024-05-16 20:19:10.894155] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.990 [2024-05-16 20:19:10.894163] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.990 [2024-05-16 20:19:10.894170] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da6540) 00:20:23.990 [2024-05-16 20:19:10.894181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.990 [2024-05-16 20:19:10.894207] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e067c0, cid 3, qid 0 00:20:23.990 [2024-05-16 20:19:10.894387] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.990 [2024-05-16 20:19:10.894401] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.990 [2024-05-16 20:19:10.894409] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.990 [2024-05-16 20:19:10.894416] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e067c0) on tqpair=0x1da6540 00:20:23.990 [2024-05-16 20:19:10.894437] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.990 [2024-05-16 20:19:10.894447] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.990 [2024-05-16 20:19:10.894453] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da6540) 00:20:23.990 [2024-05-16 20:19:10.894464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.990 [2024-05-16 20:19:10.894491] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e067c0, cid 3, qid 0 00:20:23.990 [2024-05-16 20:19:10.894607] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.990 [2024-05-16 20:19:10.894621] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.990 [2024-05-16 20:19:10.894628] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.990 [2024-05-16 20:19:10.894635] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e067c0) on tqpair=0x1da6540 00:20:23.990 [2024-05-16 20:19:10.894646] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:23.990 [2024-05-16 20:19:10.894655] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:23.990 [2024-05-16 20:19:10.894671] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.990 [2024-05-16 20:19:10.894680] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.990 [2024-05-16 20:19:10.894687] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da6540) 00:20:23.990 [2024-05-16 20:19:10.894697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.990 [2024-05-16 20:19:10.894718] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e067c0, cid 3, qid 0 00:20:23.990 [2024-05-16 20:19:10.894795] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.990 [2024-05-16 20:19:10.894809] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.990 [2024-05-16 20:19:10.894816] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.990 [2024-05-16 20:19:10.894822] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e067c0) on tqpair=0x1da6540 00:20:23.990 [2024-05-16 20:19:10.894841] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.990 [2024-05-16 20:19:10.894851] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.990 [2024-05-16 20:19:10.894868] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da6540) 00:20:23.990 [2024-05-16 20:19:10.894879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.990 [2024-05-16 20:19:10.894899] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e067c0, cid 3, qid 0 00:20:23.990 [2024-05-16 20:19:10.894975] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.990 [2024-05-16 20:19:10.894989] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.990 [2024-05-16 20:19:10.894996] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.990 [2024-05-16 20:19:10.895002] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e067c0) on tqpair=0x1da6540 00:20:23.990 [2024-05-16 20:19:10.895019] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.990 [2024-05-16 20:19:10.895029] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.990 [2024-05-16 20:19:10.895035] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da6540) 00:20:23.990 [2024-05-16 20:19:10.895046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.990 [2024-05-16 20:19:10.895066] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e067c0, cid 3, qid 0 00:20:23.990 [2024-05-16 20:19:10.895139] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.990 [2024-05-16 
20:19:10.895152] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.990 [2024-05-16 20:19:10.895163] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.990 [2024-05-16 20:19:10.895170] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e067c0) on tqpair=0x1da6540 00:20:23.990 [2024-05-16 20:19:10.895187] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.990 [2024-05-16 20:19:10.895197] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.990 [2024-05-16 20:19:10.895204] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da6540) 00:20:23.990 [2024-05-16 20:19:10.895215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.991 [2024-05-16 20:19:10.895235] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e067c0, cid 3, qid 0 00:20:23.991 [2024-05-16 20:19:10.895304] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.991 [2024-05-16 20:19:10.895316] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.991 [2024-05-16 20:19:10.895323] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.895330] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e067c0) on tqpair=0x1da6540 00:20:23.991 [2024-05-16 20:19:10.895347] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.895357] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.895363] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da6540) 00:20:23.991 [2024-05-16 20:19:10.895374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.991 [2024-05-16 20:19:10.895394] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e067c0, cid 3, qid 0 00:20:23.991 [2024-05-16 20:19:10.895477] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.991 [2024-05-16 20:19:10.895490] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.991 [2024-05-16 20:19:10.895498] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.895504] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e067c0) on tqpair=0x1da6540 00:20:23.991 [2024-05-16 20:19:10.895522] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.895532] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.895538] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da6540) 00:20:23.991 [2024-05-16 20:19:10.895549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.991 [2024-05-16 20:19:10.895569] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e067c0, cid 3, qid 0 00:20:23.991 [2024-05-16 20:19:10.895648] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.991 [2024-05-16 20:19:10.895662] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.991 [2024-05-16 20:19:10.895669] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:20:23.991 [2024-05-16 20:19:10.895676] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e067c0) on tqpair=0x1da6540 00:20:23.991 [2024-05-16 20:19:10.895693] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.895703] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.895710] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da6540) 00:20:23.991 [2024-05-16 20:19:10.895720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.991 [2024-05-16 20:19:10.895740] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e067c0, cid 3, qid 0 00:20:23.991 [2024-05-16 20:19:10.895814] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.991 [2024-05-16 20:19:10.895828] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.991 [2024-05-16 20:19:10.895835] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.895847] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e067c0) on tqpair=0x1da6540 00:20:23.991 [2024-05-16 20:19:10.899877] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.899890] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.899897] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da6540) 00:20:23.991 [2024-05-16 20:19:10.899908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.991 [2024-05-16 20:19:10.899931] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e067c0, cid 3, qid 0 00:20:23.991 [2024-05-16 20:19:10.900020] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.991 [2024-05-16 20:19:10.900034] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.991 [2024-05-16 20:19:10.900041] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.900048] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e067c0) on tqpair=0x1da6540 00:20:23.991 [2024-05-16 20:19:10.900062] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:20:23.991 00:20:23.991 20:19:10 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:23.991 [2024-05-16 20:19:10.931165] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
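The run that begins here is the second of two spdk_nvme_identify invocations in host/identify.sh: the first (whose output appears above) targeted the discovery subsystem, this one targets the NVM subsystem directly and should also report the Malloc0-backed namespace. Stripped of the absolute workspace path, the two commands are:

  # Discovery controller: prints controller data plus the discovery log page
  spdk_nvme_identify -L all \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'

  # NVM subsystem cnode1, exported by the RPC sequence earlier
  spdk_nvme_identify -L all \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

-r takes a transport ID string, and -L all turns on every debug log flag, which is what produces the nvme_tcp/nvme_ctrlr DEBUG lines interleaved with the report.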
00:20:23.991 [2024-05-16 20:19:10.931216] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid269073 ] 00:20:23.991 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.991 [2024-05-16 20:19:10.964680] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:23.991 [2024-05-16 20:19:10.964727] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:23.991 [2024-05-16 20:19:10.964737] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:23.991 [2024-05-16 20:19:10.964753] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:23.991 [2024-05-16 20:19:10.964764] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:23.991 [2024-05-16 20:19:10.964965] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:23.991 [2024-05-16 20:19:10.965007] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2393540 0 00:20:23.991 [2024-05-16 20:19:10.971879] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:23.991 [2024-05-16 20:19:10.971897] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:23.991 [2024-05-16 20:19:10.971904] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:23.991 [2024-05-16 20:19:10.971911] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:23.991 [2024-05-16 20:19:10.971948] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.971960] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.971966] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2393540) 00:20:23.991 [2024-05-16 20:19:10.971979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:23.991 [2024-05-16 20:19:10.972004] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f33a0, cid 0, qid 0 00:20:23.991 [2024-05-16 20:19:10.977864] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.991 [2024-05-16 20:19:10.977883] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.991 [2024-05-16 20:19:10.977891] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.977898] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f33a0) on tqpair=0x2393540 00:20:23.991 [2024-05-16 20:19:10.977913] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:23.991 [2024-05-16 20:19:10.977924] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:23.991 [2024-05-16 20:19:10.977934] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:23.991 [2024-05-16 20:19:10.977952] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.977961] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.991 [2024-05-16 
20:19:10.977967] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2393540) 00:20:23.991 [2024-05-16 20:19:10.977979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.991 [2024-05-16 20:19:10.978002] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f33a0, cid 0, qid 0 00:20:23.991 [2024-05-16 20:19:10.978102] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.991 [2024-05-16 20:19:10.978116] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.991 [2024-05-16 20:19:10.978123] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.978130] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f33a0) on tqpair=0x2393540 00:20:23.991 [2024-05-16 20:19:10.978139] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:23.991 [2024-05-16 20:19:10.978153] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:23.991 [2024-05-16 20:19:10.978165] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.978173] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.978179] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2393540) 00:20:23.991 [2024-05-16 20:19:10.978190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.991 [2024-05-16 20:19:10.978211] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f33a0, cid 0, qid 0 00:20:23.991 [2024-05-16 20:19:10.978285] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.991 [2024-05-16 20:19:10.978299] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.991 [2024-05-16 20:19:10.978306] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.978312] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f33a0) on tqpair=0x2393540 00:20:23.991 [2024-05-16 20:19:10.978322] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:23.991 [2024-05-16 20:19:10.978336] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:23.991 [2024-05-16 20:19:10.978349] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.978356] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.978362] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2393540) 00:20:23.991 [2024-05-16 20:19:10.978373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.991 [2024-05-16 20:19:10.978394] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f33a0, cid 0, qid 0 00:20:23.991 [2024-05-16 20:19:10.978468] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.991 [2024-05-16 20:19:10.978484] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:20:23.991 [2024-05-16 20:19:10.978491] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.978498] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f33a0) on tqpair=0x2393540 00:20:23.991 [2024-05-16 20:19:10.978508] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:23.991 [2024-05-16 20:19:10.978525] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.991 [2024-05-16 20:19:10.978534] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:10.978541] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2393540) 00:20:23.992 [2024-05-16 20:19:10.978551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.992 [2024-05-16 20:19:10.978572] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f33a0, cid 0, qid 0 00:20:23.992 [2024-05-16 20:19:10.978645] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.992 [2024-05-16 20:19:10.978659] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.992 [2024-05-16 20:19:10.978665] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:10.978672] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f33a0) on tqpair=0x2393540 00:20:23.992 [2024-05-16 20:19:10.978681] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:23.992 [2024-05-16 20:19:10.978689] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:23.992 [2024-05-16 20:19:10.978703] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:23.992 [2024-05-16 20:19:10.978812] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:23.992 [2024-05-16 20:19:10.978819] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:23.992 [2024-05-16 20:19:10.978831] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:10.982860] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:10.982872] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2393540) 00:20:23.992 [2024-05-16 20:19:10.982883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.992 [2024-05-16 20:19:10.982906] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f33a0, cid 0, qid 0 00:20:23.992 [2024-05-16 20:19:10.983018] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.992 [2024-05-16 20:19:10.983032] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.992 [2024-05-16 20:19:10.983039] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:10.983045] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f33a0) on 
tqpair=0x2393540 00:20:23.992 [2024-05-16 20:19:10.983055] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:23.992 [2024-05-16 20:19:10.983072] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:10.983081] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:10.983088] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2393540) 00:20:23.992 [2024-05-16 20:19:10.983099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.992 [2024-05-16 20:19:10.983120] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f33a0, cid 0, qid 0 00:20:23.992 [2024-05-16 20:19:10.983196] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.992 [2024-05-16 20:19:10.983210] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.992 [2024-05-16 20:19:10.983217] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:10.983224] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f33a0) on tqpair=0x2393540 00:20:23.992 [2024-05-16 20:19:10.983233] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:23.992 [2024-05-16 20:19:10.983241] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:23.992 [2024-05-16 20:19:10.983255] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:23.992 [2024-05-16 20:19:10.983273] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:23.992 [2024-05-16 20:19:10.983287] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:10.983295] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2393540) 00:20:23.992 [2024-05-16 20:19:10.983306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.992 [2024-05-16 20:19:10.983328] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f33a0, cid 0, qid 0 00:20:23.992 [2024-05-16 20:19:10.983457] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.992 [2024-05-16 20:19:10.983472] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.992 [2024-05-16 20:19:10.983479] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:10.983485] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2393540): datao=0, datal=4096, cccid=0 00:20:23.992 [2024-05-16 20:19:10.983492] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23f33a0) on tqpair(0x2393540): expected_datao=0, payload_size=4096 00:20:23.992 [2024-05-16 20:19:10.983500] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:10.983517] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:10.983526] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:11.023935] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.992 [2024-05-16 20:19:11.023953] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.992 [2024-05-16 20:19:11.023961] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:11.023968] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f33a0) on tqpair=0x2393540 00:20:23.992 [2024-05-16 20:19:11.023981] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:23.992 [2024-05-16 20:19:11.023990] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:23.992 [2024-05-16 20:19:11.023998] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:23.992 [2024-05-16 20:19:11.024009] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:23.992 [2024-05-16 20:19:11.024018] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:23.992 [2024-05-16 20:19:11.024026] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:23.992 [2024-05-16 20:19:11.024041] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:23.992 [2024-05-16 20:19:11.024053] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:11.024061] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:11.024071] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2393540) 00:20:23.992 [2024-05-16 20:19:11.024083] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.992 [2024-05-16 20:19:11.024106] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f33a0, cid 0, qid 0 00:20:23.992 [2024-05-16 20:19:11.024181] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.992 [2024-05-16 20:19:11.024195] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.992 [2024-05-16 20:19:11.024202] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:11.024209] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f33a0) on tqpair=0x2393540 00:20:23.992 [2024-05-16 20:19:11.024221] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:11.024229] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:11.024235] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2393540) 00:20:23.992 [2024-05-16 20:19:11.024246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.992 [2024-05-16 20:19:11.024256] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:11.024263] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:11.024269] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2393540) 00:20:23.992 [2024-05-16 20:19:11.024278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.992 [2024-05-16 20:19:11.024288] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:11.024295] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:11.024301] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2393540) 00:20:23.992 [2024-05-16 20:19:11.024310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.992 [2024-05-16 20:19:11.024319] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:11.024326] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:11.024332] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:23.992 [2024-05-16 20:19:11.024341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.992 [2024-05-16 20:19:11.024350] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:23.992 [2024-05-16 20:19:11.024368] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:23.992 [2024-05-16 20:19:11.024397] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:11.024404] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2393540) 00:20:23.992 [2024-05-16 20:19:11.024414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.992 [2024-05-16 20:19:11.024436] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f33a0, cid 0, qid 0 00:20:23.992 [2024-05-16 20:19:11.024463] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f3500, cid 1, qid 0 00:20:23.992 [2024-05-16 20:19:11.024471] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f3660, cid 2, qid 0 00:20:23.992 [2024-05-16 20:19:11.024479] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:23.992 [2024-05-16 20:19:11.024486] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f3920, cid 4, qid 0 00:20:23.992 [2024-05-16 20:19:11.024588] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.992 [2024-05-16 20:19:11.024603] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.992 [2024-05-16 20:19:11.024610] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.992 [2024-05-16 20:19:11.024617] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f3920) on tqpair=0x2393540 00:20:23.992 [2024-05-16 20:19:11.024626] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:23.992 [2024-05-16 20:19:11.024635] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:20:23.992 [2024-05-16 20:19:11.024649] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:23.993 [2024-05-16 20:19:11.024661] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:23.993 [2024-05-16 20:19:11.024672] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.993 [2024-05-16 20:19:11.024679] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.993 [2024-05-16 20:19:11.024686] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2393540) 00:20:23.993 [2024-05-16 20:19:11.024696] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.993 [2024-05-16 20:19:11.024717] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f3920, cid 4, qid 0 00:20:23.993 [2024-05-16 20:19:11.024794] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.993 [2024-05-16 20:19:11.024806] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.993 [2024-05-16 20:19:11.024813] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.993 [2024-05-16 20:19:11.024820] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f3920) on tqpair=0x2393540 00:20:23.993 [2024-05-16 20:19:11.024884] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:23.993 [2024-05-16 20:19:11.024906] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:23.993 [2024-05-16 20:19:11.024921] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.993 [2024-05-16 20:19:11.024929] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2393540) 00:20:23.993 [2024-05-16 20:19:11.024940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.993 [2024-05-16 20:19:11.024961] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f3920, cid 4, qid 0 00:20:23.993 [2024-05-16 20:19:11.025057] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.993 [2024-05-16 20:19:11.025071] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.993 [2024-05-16 20:19:11.025078] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.993 [2024-05-16 20:19:11.025085] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2393540): datao=0, datal=4096, cccid=4 00:20:23.993 [2024-05-16 20:19:11.025092] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23f3920) on tqpair(0x2393540): expected_datao=0, payload_size=4096 00:20:23.993 [2024-05-16 20:19:11.025099] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.993 [2024-05-16 20:19:11.025116] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.993 [2024-05-16 20:19:11.025125] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.993 [2024-05-16 20:19:11.069876] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.993 [2024-05-16 20:19:11.069895] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.993 [2024-05-16 20:19:11.069902] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.993 [2024-05-16 20:19:11.069913] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f3920) on tqpair=0x2393540 00:20:23.993 [2024-05-16 20:19:11.069932] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:23.993 [2024-05-16 20:19:11.069951] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:23.993 [2024-05-16 20:19:11.069969] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:23.993 [2024-05-16 20:19:11.069983] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.993 [2024-05-16 20:19:11.069991] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2393540) 00:20:23.993 [2024-05-16 20:19:11.070002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.993 [2024-05-16 20:19:11.070025] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f3920, cid 4, qid 0 00:20:23.993 [2024-05-16 20:19:11.070140] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.993 [2024-05-16 20:19:11.070154] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.993 [2024-05-16 20:19:11.070161] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.993 [2024-05-16 20:19:11.070168] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2393540): datao=0, datal=4096, cccid=4 00:20:23.993 [2024-05-16 20:19:11.070176] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23f3920) on tqpair(0x2393540): expected_datao=0, payload_size=4096 00:20:23.993 [2024-05-16 20:19:11.070183] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.993 [2024-05-16 20:19:11.070200] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.993 [2024-05-16 20:19:11.070209] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.993 [2024-05-16 20:19:11.110952] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.993 [2024-05-16 20:19:11.110971] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.993 [2024-05-16 20:19:11.110978] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.993 [2024-05-16 20:19:11.110985] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f3920) on tqpair=0x2393540 00:20:23.993 [2024-05-16 20:19:11.111009] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:23.993 [2024-05-16 20:19:11.111029] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:23.993 [2024-05-16 20:19:11.111044] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.993 [2024-05-16 20:19:11.111052] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x2393540) 00:20:23.993 [2024-05-16 20:19:11.111064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.993 [2024-05-16 20:19:11.111087] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f3920, cid 4, qid 0 00:20:23.993 [2024-05-16 20:19:11.111183] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.993 [2024-05-16 20:19:11.111197] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.993 [2024-05-16 20:19:11.111204] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.993 [2024-05-16 20:19:11.111211] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2393540): datao=0, datal=4096, cccid=4 00:20:23.993 [2024-05-16 20:19:11.111218] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23f3920) on tqpair(0x2393540): expected_datao=0, payload_size=4096 00:20:23.993 [2024-05-16 20:19:11.111226] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.993 [2024-05-16 20:19:11.111243] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.993 [2024-05-16 20:19:11.111256] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.255 [2024-05-16 20:19:11.151956] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.255 [2024-05-16 20:19:11.151977] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.255 [2024-05-16 20:19:11.151985] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.255 [2024-05-16 20:19:11.151993] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f3920) on tqpair=0x2393540 00:20:24.255 [2024-05-16 20:19:11.152009] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:24.255 [2024-05-16 20:19:11.152026] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:24.255 [2024-05-16 20:19:11.152042] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:24.255 [2024-05-16 20:19:11.152053] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:24.255 [2024-05-16 20:19:11.152062] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:24.255 [2024-05-16 20:19:11.152072] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:24.255 [2024-05-16 20:19:11.152080] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:24.255 [2024-05-16 20:19:11.152089] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:24.255 [2024-05-16 20:19:11.152111] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.255 [2024-05-16 20:19:11.152121] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2393540) 00:20:24.255 [2024-05-16 20:19:11.152133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.255 [2024-05-16 20:19:11.152152] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.255 [2024-05-16 20:19:11.152160] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.255 [2024-05-16 20:19:11.152167] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2393540) 00:20:24.255 [2024-05-16 20:19:11.152176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.255 [2024-05-16 20:19:11.152203] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f3920, cid 4, qid 0 00:20:24.255 [2024-05-16 20:19:11.152215] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f3a80, cid 5, qid 0 00:20:24.255 [2024-05-16 20:19:11.152341] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.255 [2024-05-16 20:19:11.152354] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.255 [2024-05-16 20:19:11.152361] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.255 [2024-05-16 20:19:11.152368] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f3920) on tqpair=0x2393540 00:20:24.255 [2024-05-16 20:19:11.152380] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.255 [2024-05-16 20:19:11.152389] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.255 [2024-05-16 20:19:11.152396] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.255 [2024-05-16 20:19:11.152403] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f3a80) on tqpair=0x2393540 00:20:24.255 [2024-05-16 20:19:11.152420] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.255 [2024-05-16 20:19:11.152429] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2393540) 00:20:24.255 [2024-05-16 20:19:11.152440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.255 [2024-05-16 20:19:11.152465] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f3a80, cid 5, qid 0 00:20:24.255 [2024-05-16 20:19:11.152576] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.255 [2024-05-16 20:19:11.152590] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.255 [2024-05-16 20:19:11.152597] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.255 [2024-05-16 20:19:11.152604] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f3a80) on tqpair=0x2393540 00:20:24.255 [2024-05-16 20:19:11.152622] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.255 [2024-05-16 20:19:11.152631] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2393540) 00:20:24.255 [2024-05-16 20:19:11.152642] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.255 [2024-05-16 20:19:11.152663] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f3a80, cid 5, qid 0 00:20:24.255 [2024-05-16 20:19:11.152758] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.255 [2024-05-16 20:19:11.152770] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.255 [2024-05-16 20:19:11.152777] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.255 [2024-05-16 20:19:11.152783] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f3a80) on tqpair=0x2393540 00:20:24.255 [2024-05-16 20:19:11.152800] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.255 [2024-05-16 20:19:11.152809] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2393540) 00:20:24.255 [2024-05-16 20:19:11.152820] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.255 [2024-05-16 20:19:11.152848] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f3a80, cid 5, qid 0 00:20:24.255 [2024-05-16 20:19:11.152934] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.255 [2024-05-16 20:19:11.152947] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.255 [2024-05-16 20:19:11.152954] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.255 [2024-05-16 20:19:11.152961] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f3a80) on tqpair=0x2393540 00:20:24.255 [2024-05-16 20:19:11.152982] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.256 [2024-05-16 20:19:11.152993] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2393540) 00:20:24.256 [2024-05-16 20:19:11.153004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.256 [2024-05-16 20:19:11.153016] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.256 [2024-05-16 20:19:11.153025] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2393540) 00:20:24.256 [2024-05-16 20:19:11.153034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.256 [2024-05-16 20:19:11.153046] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.256 [2024-05-16 20:19:11.153054] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2393540) 00:20:24.256 [2024-05-16 20:19:11.153063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.256 [2024-05-16 20:19:11.153076] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.256 [2024-05-16 20:19:11.153083] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2393540) 00:20:24.256 [2024-05-16 20:19:11.153093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.256 [2024-05-16 20:19:11.153119] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f3a80, cid 5, qid 0 00:20:24.256 [2024-05-16 20:19:11.153131] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f3920, cid 4, qid 0 00:20:24.256 [2024-05-16 20:19:11.153139] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x23f3be0, cid 6, qid 0 00:20:24.256 [2024-05-16 20:19:11.153147] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f3d40, cid 7, qid 0 00:20:24.256 [2024-05-16 20:19:11.153301] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.256 [2024-05-16 20:19:11.153316] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.256 [2024-05-16 20:19:11.153323] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.256 [2024-05-16 20:19:11.153330] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2393540): datao=0, datal=8192, cccid=5 00:20:24.256 [2024-05-16 20:19:11.153338] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23f3a80) on tqpair(0x2393540): expected_datao=0, payload_size=8192 00:20:24.256 [2024-05-16 20:19:11.153345] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.256 [2024-05-16 20:19:11.153364] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.256 [2024-05-16 20:19:11.153373] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.256 [2024-05-16 20:19:11.153386] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.256 [2024-05-16 20:19:11.153396] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.256 [2024-05-16 20:19:11.153403] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.256 [2024-05-16 20:19:11.153410] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2393540): datao=0, datal=512, cccid=4 00:20:24.256 [2024-05-16 20:19:11.153417] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23f3920) on tqpair(0x2393540): expected_datao=0, payload_size=512 00:20:24.256 [2024-05-16 20:19:11.153425] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.256 [2024-05-16 20:19:11.153434] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.256 [2024-05-16 20:19:11.153442] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.256 [2024-05-16 20:19:11.153450] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.256 [2024-05-16 20:19:11.153459] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.256 [2024-05-16 20:19:11.153466] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.256 [2024-05-16 20:19:11.153472] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2393540): datao=0, datal=512, cccid=6 00:20:24.256 [2024-05-16 20:19:11.153480] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23f3be0) on tqpair(0x2393540): expected_datao=0, payload_size=512 00:20:24.256 [2024-05-16 20:19:11.153487] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.256 [2024-05-16 20:19:11.153496] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.256 [2024-05-16 20:19:11.153503] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.256 [2024-05-16 20:19:11.153511] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.256 [2024-05-16 20:19:11.153521] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.256 [2024-05-16 20:19:11.153527] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.256 [2024-05-16 20:19:11.153534] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2393540): datao=0, datal=4096, cccid=7 
00:20:24.256 [2024-05-16 20:19:11.153541] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23f3d40) on tqpair(0x2393540): expected_datao=0, payload_size=4096
00:20:24.256 [2024-05-16 20:19:11.153549] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.256 [2024-05-16 20:19:11.153558] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:24.256 [2024-05-16 20:19:11.153580] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:24.256 [2024-05-16 20:19:11.153592] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.256 [2024-05-16 20:19:11.153603] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.256 [2024-05-16 20:19:11.153612] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.256 [2024-05-16 20:19:11.153620] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f3a80) on tqpair=0x2393540
00:20:24.256 [2024-05-16 20:19:11.153639] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.256 [2024-05-16 20:19:11.153651] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.256 [2024-05-16 20:19:11.153657] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.256 [2024-05-16 20:19:11.153664] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f3920) on tqpair=0x2393540
00:20:24.256 [2024-05-16 20:19:11.153679] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.256 [2024-05-16 20:19:11.153689] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.256 [2024-05-16 20:19:11.153696] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.256 [2024-05-16 20:19:11.153702] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f3be0) on tqpair=0x2393540
00:20:24.256 [2024-05-16 20:19:11.153717] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.256 [2024-05-16 20:19:11.153727] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.256 [2024-05-16 20:19:11.153734] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.256 [2024-05-16 20:19:11.153740] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f3d40) on tqpair=0x2393540
00:20:24.256 =====================================================
00:20:24.256 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:24.256 =====================================================
00:20:24.256 Controller Capabilities/Features
00:20:24.256 ================================
00:20:24.256 Vendor ID: 8086
00:20:24.256 Subsystem Vendor ID: 8086
00:20:24.256 Serial Number: SPDK00000000000001
00:20:24.256 Model Number: SPDK bdev Controller
00:20:24.256 Firmware Version: 24.09
00:20:24.256 Recommended Arb Burst: 6
00:20:24.256 IEEE OUI Identifier: e4 d2 5c
00:20:24.256 Multi-path I/O
00:20:24.256 May have multiple subsystem ports: Yes
00:20:24.256 May have multiple controllers: Yes
00:20:24.256 Associated with SR-IOV VF: No
00:20:24.256 Max Data Transfer Size: 131072
00:20:24.256 Max Number of Namespaces: 32
00:20:24.256 Max Number of I/O Queues: 127
00:20:24.256 NVMe Specification Version (VS): 1.3
00:20:24.256 NVMe Specification Version (Identify): 1.3
00:20:24.256 Maximum Queue Entries: 128
00:20:24.256 Contiguous Queues Required: Yes
00:20:24.256 Arbitration Mechanisms Supported
00:20:24.256 Weighted Round Robin: Not Supported
00:20:24.256 Vendor Specific: Not Supported
00:20:24.256 Reset Timeout: 15000 ms
00:20:24.256 Doorbell Stride: 4 bytes
00:20:24.256 NVM Subsystem Reset: Not Supported
00:20:24.256 Command Sets Supported
00:20:24.256 NVM Command Set: Supported
00:20:24.256 Boot Partition: Not Supported
00:20:24.256 Memory Page Size Minimum: 4096 bytes
00:20:24.256 Memory Page Size Maximum: 4096 bytes
00:20:24.256 Persistent Memory Region: Not Supported
00:20:24.256 Optional Asynchronous Events Supported
00:20:24.256 Namespace Attribute Notices: Supported
00:20:24.256 Firmware Activation Notices: Not Supported
00:20:24.256 ANA Change Notices: Not Supported
00:20:24.256 PLE Aggregate Log Change Notices: Not Supported
00:20:24.256 LBA Status Info Alert Notices: Not Supported
00:20:24.256 EGE Aggregate Log Change Notices: Not Supported
00:20:24.256 Normal NVM Subsystem Shutdown event: Not Supported
00:20:24.256 Zone Descriptor Change Notices: Not Supported
00:20:24.256 Discovery Log Change Notices: Not Supported
00:20:24.256 Controller Attributes
00:20:24.256 128-bit Host Identifier: Supported
00:20:24.256 Non-Operational Permissive Mode: Not Supported
00:20:24.256 NVM Sets: Not Supported
00:20:24.256 Read Recovery Levels: Not Supported
00:20:24.256 Endurance Groups: Not Supported
00:20:24.256 Predictable Latency Mode: Not Supported
00:20:24.256 Traffic Based Keep ALive: Not Supported
00:20:24.256 Namespace Granularity: Not Supported
00:20:24.256 SQ Associations: Not Supported
00:20:24.256 UUID List: Not Supported
00:20:24.256 Multi-Domain Subsystem: Not Supported
00:20:24.256 Fixed Capacity Management: Not Supported
00:20:24.256 Variable Capacity Management: Not Supported
00:20:24.256 Delete Endurance Group: Not Supported
00:20:24.256 Delete NVM Set: Not Supported
00:20:24.256 Extended LBA Formats Supported: Not Supported
00:20:24.256 Flexible Data Placement Supported: Not Supported
00:20:24.256 
00:20:24.256 Controller Memory Buffer Support
00:20:24.256 ================================
00:20:24.256 Supported: No
00:20:24.256 
00:20:24.256 Persistent Memory Region Support
00:20:24.256 ================================
00:20:24.256 Supported: No
00:20:24.256 
00:20:24.256 Admin Command Set Attributes
00:20:24.256 ============================
00:20:24.256 Security Send/Receive: Not Supported
00:20:24.256 Format NVM: Not Supported
00:20:24.256 Firmware Activate/Download: Not Supported
00:20:24.256 Namespace Management: Not Supported
00:20:24.256 Device Self-Test: Not Supported
00:20:24.256 Directives: Not Supported
00:20:24.256 NVMe-MI: Not Supported
00:20:24.257 Virtualization Management: Not Supported
00:20:24.257 Doorbell Buffer Config: Not Supported
00:20:24.257 Get LBA Status Capability: Not Supported
00:20:24.257 Command & Feature Lockdown Capability: Not Supported
00:20:24.257 Abort Command Limit: 4
00:20:24.257 Async Event Request Limit: 4
00:20:24.257 Number of Firmware Slots: N/A
00:20:24.257 Firmware Slot 1 Read-Only: N/A
00:20:24.257 Firmware Activation Without Reset: N/A
00:20:24.257 Multiple Update Detection Support: N/A
00:20:24.257 Firmware Update Granularity: No Information Provided
00:20:24.257 Per-Namespace SMART Log: No
00:20:24.257 Asymmetric Namespace Access Log Page: Not Supported
00:20:24.257 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:20:24.257 Command Effects Log Page: Supported
00:20:24.257 Get Log Page Extended Data: Supported
00:20:24.257 Telemetry Log Pages: Not Supported
00:20:24.257 Persistent Event Log Pages: Not Supported
00:20:24.257 Supported Log Pages Log Page: May Support
00:20:24.257 Commands Supported & Effects Log Page: Not Supported
00:20:24.257 Feature Identifiers & Effects Log Page:May Support
00:20:24.257 NVMe-MI Commands & Effects Log Page: May Support
00:20:24.257 Data Area 4 for Telemetry Log: Not Supported
00:20:24.257 Error Log Page Entries Supported: 128
00:20:24.257 Keep Alive: Supported
00:20:24.257 Keep Alive Granularity: 10000 ms
00:20:24.257 
00:20:24.257 NVM Command Set Attributes
00:20:24.257 ==========================
00:20:24.257 Submission Queue Entry Size
00:20:24.257 Max: 64
00:20:24.257 Min: 64
00:20:24.257 Completion Queue Entry Size
00:20:24.257 Max: 16
00:20:24.257 Min: 16
00:20:24.257 Number of Namespaces: 32
00:20:24.257 Compare Command: Supported
00:20:24.257 Write Uncorrectable Command: Not Supported
00:20:24.257 Dataset Management Command: Supported
00:20:24.257 Write Zeroes Command: Supported
00:20:24.257 Set Features Save Field: Not Supported
00:20:24.257 Reservations: Supported
00:20:24.257 Timestamp: Not Supported
00:20:24.257 Copy: Supported
00:20:24.257 Volatile Write Cache: Present
00:20:24.257 Atomic Write Unit (Normal): 1
00:20:24.257 Atomic Write Unit (PFail): 1
00:20:24.257 Atomic Compare & Write Unit: 1
00:20:24.257 Fused Compare & Write: Supported
00:20:24.257 Scatter-Gather List
00:20:24.257 SGL Command Set: Supported
00:20:24.257 SGL Keyed: Supported
00:20:24.257 SGL Bit Bucket Descriptor: Not Supported
00:20:24.257 SGL Metadata Pointer: Not Supported
00:20:24.257 Oversized SGL: Not Supported
00:20:24.257 SGL Metadata Address: Not Supported
00:20:24.257 SGL Offset: Supported
00:20:24.257 Transport SGL Data Block: Not Supported
00:20:24.257 Replay Protected Memory Block: Not Supported
00:20:24.257 
00:20:24.257 Firmware Slot Information
00:20:24.257 =========================
00:20:24.257 Active slot: 1
00:20:24.257 Slot 1 Firmware Revision: 24.09
00:20:24.257 
00:20:24.257 
00:20:24.257 Commands Supported and Effects
00:20:24.257 ==============================
00:20:24.257 Admin Commands
00:20:24.257 --------------
00:20:24.257 Get Log Page (02h): Supported
00:20:24.257 Identify (06h): Supported
00:20:24.257 Abort (08h): Supported
00:20:24.257 Set Features (09h): Supported
00:20:24.257 Get Features (0Ah): Supported
00:20:24.257 Asynchronous Event Request (0Ch): Supported
00:20:24.257 Keep Alive (18h): Supported
00:20:24.257 I/O Commands
00:20:24.257 ------------
00:20:24.257 Flush (00h): Supported LBA-Change
00:20:24.257 Write (01h): Supported LBA-Change
00:20:24.257 Read (02h): Supported
00:20:24.257 Compare (05h): Supported
00:20:24.257 Write Zeroes (08h): Supported LBA-Change
00:20:24.257 Dataset Management (09h): Supported LBA-Change
00:20:24.257 Copy (19h): Supported LBA-Change
00:20:24.257 Unknown (79h): Supported LBA-Change
00:20:24.257 Unknown (7Ah): Supported
00:20:24.257 
00:20:24.257 Error Log
00:20:24.257 =========
00:20:24.257 
00:20:24.257 Arbitration
00:20:24.257 ===========
00:20:24.257 Arbitration Burst: 1
00:20:24.257 
00:20:24.257 Power Management
00:20:24.257 ================
00:20:24.257 Number of Power States: 1
00:20:24.257 Current Power State: Power State #0
00:20:24.257 Power State #0:
00:20:24.257 Max Power: 0.00 W
00:20:24.257 Non-Operational State: Operational
00:20:24.257 Entry Latency: Not Reported
00:20:24.257 Exit Latency: Not Reported
00:20:24.257 Relative Read Throughput: 0
00:20:24.257 Relative Read Latency: 0
00:20:24.257 Relative Write Throughput: 0
00:20:24.257 Relative Write Latency: 0
00:20:24.257 Idle Power: Not Reported
00:20:24.257 Active Power: Not Reported
00:20:24.257 Non-Operational Permissive Mode: Not Supported
00:20:24.257 
00:20:24.257 Health Information
00:20:24.257 ==================
00:20:24.257 Critical Warnings:
00:20:24.257 Available Spare Space: OK
00:20:24.257 Temperature: OK
00:20:24.257 Device Reliability: OK
00:20:24.257 Read Only: No
00:20:24.257 Volatile Memory Backup: OK
00:20:24.257 Current Temperature: 0 Kelvin (-273 Celsius)
00:20:24.257 Temperature Threshold: [2024-05-16 20:19:11.155913] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.257 [2024-05-16 20:19:11.155929] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2393540)
00:20:24.257 [2024-05-16 20:19:11.155941] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.257 [2024-05-16 20:19:11.155965] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f3d40, cid 7, qid 0
00:20:24.257 [2024-05-16 20:19:11.156099] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.257 [2024-05-16 20:19:11.156112] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.257 [2024-05-16 20:19:11.156119] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.257 [2024-05-16 20:19:11.156125] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f3d40) on tqpair=0x2393540
00:20:24.257 [2024-05-16 20:19:11.156165] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:20:24.257 [2024-05-16 20:19:11.156187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:24.257 [2024-05-16 20:19:11.156199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:24.257 [2024-05-16 20:19:11.156210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:24.257 [2024-05-16 20:19:11.156219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:24.257 [2024-05-16 20:19:11.156233] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.257 [2024-05-16 20:19:11.156241] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.257 [2024-05-16 20:19:11.156248] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540)
00:20:24.257 [2024-05-16 20:19:11.156273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.257 [2024-05-16 20:19:11.156296] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0
00:20:24.257 [2024-05-16 20:19:11.156421] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.257 [2024-05-16 20:19:11.156434] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.257 [2024-05-16 20:19:11.156441] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.257 [2024-05-16 20:19:11.156447] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540
00:20:24.257 [2024-05-16 20:19:11.156464] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.257 [2024-05-16 20:19:11.156473] nvme_tcp.c:
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.257 [2024-05-16 20:19:11.156480] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:24.257 [2024-05-16 20:19:11.156491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.257 [2024-05-16 20:19:11.156517] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:24.257 [2024-05-16 20:19:11.156612] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.257 [2024-05-16 20:19:11.156626] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.257 [2024-05-16 20:19:11.156633] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.257 [2024-05-16 20:19:11.156640] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540 00:20:24.257 [2024-05-16 20:19:11.156649] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:24.257 [2024-05-16 20:19:11.156657] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:24.257 [2024-05-16 20:19:11.156673] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.257 [2024-05-16 20:19:11.156683] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.257 [2024-05-16 20:19:11.156689] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:24.257 [2024-05-16 20:19:11.156700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.257 [2024-05-16 20:19:11.156720] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:24.257 [2024-05-16 20:19:11.156821] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.257 [2024-05-16 20:19:11.156835] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.257 [2024-05-16 20:19:11.156859] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.257 [2024-05-16 20:19:11.156869] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540 00:20:24.257 [2024-05-16 20:19:11.156889] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.257 [2024-05-16 20:19:11.156899] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.257 [2024-05-16 20:19:11.156906] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:24.257 [2024-05-16 20:19:11.156917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.257 [2024-05-16 20:19:11.156938] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:24.258 [2024-05-16 20:19:11.157023] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.258 [2024-05-16 20:19:11.157037] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.258 [2024-05-16 20:19:11.157044] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.157051] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540 00:20:24.258 [2024-05-16 20:19:11.157068] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.157077] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.157084] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:24.258 [2024-05-16 20:19:11.157094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.258 [2024-05-16 20:19:11.157115] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:24.258 [2024-05-16 20:19:11.157239] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.258 [2024-05-16 20:19:11.157252] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.258 [2024-05-16 20:19:11.157269] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.157276] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540 00:20:24.258 [2024-05-16 20:19:11.157294] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.157304] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.157310] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:24.258 [2024-05-16 20:19:11.157321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.258 [2024-05-16 20:19:11.157341] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:24.258 [2024-05-16 20:19:11.157413] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.258 [2024-05-16 20:19:11.157427] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.258 [2024-05-16 20:19:11.157434] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.157441] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540 00:20:24.258 [2024-05-16 20:19:11.157458] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.157468] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.157475] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:24.258 [2024-05-16 20:19:11.157485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.258 [2024-05-16 20:19:11.157506] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:24.258 [2024-05-16 20:19:11.157591] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.258 [2024-05-16 20:19:11.157603] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.258 [2024-05-16 20:19:11.157610] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.157617] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540 00:20:24.258 [2024-05-16 20:19:11.157634] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.157644] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.258 [2024-05-16 
20:19:11.157651] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:24.258 [2024-05-16 20:19:11.157661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.258 [2024-05-16 20:19:11.157681] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:24.258 [2024-05-16 20:19:11.157759] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.258 [2024-05-16 20:19:11.157772] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.258 [2024-05-16 20:19:11.157780] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.157786] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540 00:20:24.258 [2024-05-16 20:19:11.157804] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.157813] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.157820] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:24.258 [2024-05-16 20:19:11.157830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.258 [2024-05-16 20:19:11.157851] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:24.258 [2024-05-16 20:19:11.157959] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.258 [2024-05-16 20:19:11.157972] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.258 [2024-05-16 20:19:11.157983] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.157990] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540 00:20:24.258 [2024-05-16 20:19:11.158008] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.158018] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.158025] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:24.258 [2024-05-16 20:19:11.158035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.258 [2024-05-16 20:19:11.158056] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:24.258 [2024-05-16 20:19:11.158132] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.258 [2024-05-16 20:19:11.158156] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.258 [2024-05-16 20:19:11.158163] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.158170] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540 00:20:24.258 [2024-05-16 20:19:11.158187] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.158197] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.158204] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:24.258 [2024-05-16 20:19:11.158215] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.258 [2024-05-16 20:19:11.158235] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:24.258 [2024-05-16 20:19:11.158335] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.258 [2024-05-16 20:19:11.158349] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.258 [2024-05-16 20:19:11.158356] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.158363] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540 00:20:24.258 [2024-05-16 20:19:11.158379] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.158389] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.158396] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:24.258 [2024-05-16 20:19:11.158406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.258 [2024-05-16 20:19:11.158427] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:24.258 [2024-05-16 20:19:11.158505] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.258 [2024-05-16 20:19:11.158518] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.258 [2024-05-16 20:19:11.158526] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.158532] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540 00:20:24.258 [2024-05-16 20:19:11.158549] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.158559] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.158565] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:24.258 [2024-05-16 20:19:11.158576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.258 [2024-05-16 20:19:11.158596] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:24.258 [2024-05-16 20:19:11.158672] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.258 [2024-05-16 20:19:11.158684] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.258 [2024-05-16 20:19:11.158691] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.158701] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540 00:20:24.258 [2024-05-16 20:19:11.158720] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.158729] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.158736] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:24.258 [2024-05-16 20:19:11.158747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.258 [2024-05-16 20:19:11.158767] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:24.258 [2024-05-16 20:19:11.158849] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.258 [2024-05-16 20:19:11.158872] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.258 [2024-05-16 20:19:11.158880] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.158887] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540 00:20:24.258 [2024-05-16 20:19:11.158906] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.158916] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.158923] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:24.258 [2024-05-16 20:19:11.158934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.258 [2024-05-16 20:19:11.158955] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:24.258 [2024-05-16 20:19:11.159049] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.258 [2024-05-16 20:19:11.159063] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.258 [2024-05-16 20:19:11.159070] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.159076] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540 00:20:24.258 [2024-05-16 20:19:11.159094] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.159104] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.258 [2024-05-16 20:19:11.159111] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:24.258 [2024-05-16 20:19:11.159122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.258 [2024-05-16 20:19:11.159142] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:24.258 [2024-05-16 20:19:11.159249] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.258 [2024-05-16 20:19:11.159262] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.258 [2024-05-16 20:19:11.159270] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.259 [2024-05-16 20:19:11.159276] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540 00:20:24.259 [2024-05-16 20:19:11.159293] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.259 [2024-05-16 20:19:11.159303] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.259 [2024-05-16 20:19:11.159309] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:24.259 [2024-05-16 20:19:11.159320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.259 [2024-05-16 20:19:11.159340] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:24.259 [2024-05-16 20:19:11.159418] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:20:24.259 [2024-05-16 20:19:11.159432] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.259 [2024-05-16 20:19:11.159439] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.259 [2024-05-16 20:19:11.159446] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540 00:20:24.259 [2024-05-16 20:19:11.159468] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.259 [2024-05-16 20:19:11.159478] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.259 [2024-05-16 20:19:11.159485] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:24.259 [2024-05-16 20:19:11.159496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.259 [2024-05-16 20:19:11.159517] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:24.259 [2024-05-16 20:19:11.159591] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.259 [2024-05-16 20:19:11.159605] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.259 [2024-05-16 20:19:11.159612] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.259 [2024-05-16 20:19:11.159618] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540 00:20:24.259 [2024-05-16 20:19:11.159636] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.259 [2024-05-16 20:19:11.159646] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.259 [2024-05-16 20:19:11.159652] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:24.259 [2024-05-16 20:19:11.159663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.259 [2024-05-16 20:19:11.159684] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:24.259 [2024-05-16 20:19:11.159768] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.259 [2024-05-16 20:19:11.159781] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.259 [2024-05-16 20:19:11.159788] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.259 [2024-05-16 20:19:11.159794] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540 00:20:24.259 [2024-05-16 20:19:11.159812] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.259 [2024-05-16 20:19:11.159822] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.259 [2024-05-16 20:19:11.159828] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:24.259 [2024-05-16 20:19:11.159839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.259 [2024-05-16 20:19:11.165874] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:24.259 [2024-05-16 20:19:11.165896] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.259 [2024-05-16 20:19:11.165907] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.259 [2024-05-16 20:19:11.165913] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.259 [2024-05-16 20:19:11.165920] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540 00:20:24.259 [2024-05-16 20:19:11.165938] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.259 [2024-05-16 20:19:11.165948] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.259 [2024-05-16 20:19:11.165954] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2393540) 00:20:24.259 [2024-05-16 20:19:11.165965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.259 [2024-05-16 20:19:11.165986] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f37c0, cid 3, qid 0 00:20:24.259 [2024-05-16 20:19:11.166132] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.259 [2024-05-16 20:19:11.166146] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.259 [2024-05-16 20:19:11.166152] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.259 [2024-05-16 20:19:11.166159] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f37c0) on tqpair=0x2393540 00:20:24.259 [2024-05-16 20:19:11.166173] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 9 milliseconds 00:20:24.259 0 Kelvin (-273 Celsius) 00:20:24.259 Available Spare: 0% 00:20:24.259 Available Spare Threshold: 0% 00:20:24.259 Life Percentage Used: 0% 00:20:24.259 Data Units Read: 0 00:20:24.259 Data Units Written: 0 00:20:24.259 Host Read Commands: 0 00:20:24.259 Host Write Commands: 0 00:20:24.259 Controller Busy Time: 0 minutes 00:20:24.259 Power Cycles: 0 00:20:24.259 Power On Hours: 0 hours 00:20:24.259 Unsafe Shutdowns: 0 00:20:24.259 Unrecoverable Media Errors: 0 00:20:24.259 Lifetime Error Log Entries: 0 00:20:24.259 Warning Temperature Time: 0 minutes 00:20:24.259 Critical Temperature Time: 0 minutes 00:20:24.259 00:20:24.259 Number of Queues 00:20:24.259 ================ 00:20:24.259 Number of I/O Submission Queues: 127 00:20:24.259 Number of I/O Completion Queues: 127 00:20:24.259 00:20:24.259 Active Namespaces 00:20:24.259 ================= 00:20:24.259 Namespace ID:1 00:20:24.259 Error Recovery Timeout: Unlimited 00:20:24.259 Command Set Identifier: NVM (00h) 00:20:24.259 Deallocate: Supported 00:20:24.259 Deallocated/Unwritten Error: Not Supported 00:20:24.259 Deallocated Read Value: Unknown 00:20:24.259 Deallocate in Write Zeroes: Not Supported 00:20:24.259 Deallocated Guard Field: 0xFFFF 00:20:24.259 Flush: Supported 00:20:24.259 Reservation: Supported 00:20:24.259 Namespace Sharing Capabilities: Multiple Controllers 00:20:24.259 Size (in LBAs): 131072 (0GiB) 00:20:24.259 Capacity (in LBAs): 131072 (0GiB) 00:20:24.259 Utilization (in LBAs): 131072 (0GiB) 00:20:24.259 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:24.259 EUI64: ABCDEF0123456789 00:20:24.259 UUID: 4f27d425-5ff5-405b-9600-1b4475780a0b 00:20:24.259 Thin Provisioning: Not Supported 00:20:24.259 Per-NS Atomic Units: Yes 00:20:24.259 Atomic Boundary Size (Normal): 0 00:20:24.259 Atomic Boundary Size (PFail): 0 00:20:24.259 Atomic Boundary Offset: 0 00:20:24.259 Maximum Single Source Range Length: 65535 00:20:24.259 Maximum Copy Length: 65535 00:20:24.259 Maximum Source Range Count: 1 00:20:24.259 NGUID/EUI64 Never Reused: No 00:20:24.259 Namespace 
Write Protected: No 00:20:24.259 Number of LBA Formats: 1 00:20:24.259 Current LBA Format: LBA Format #00 00:20:24.259 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:24.259 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:24.259 rmmod nvme_tcp 00:20:24.259 rmmod nvme_fabrics 00:20:24.259 rmmod nvme_keyring 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 268897 ']' 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 268897 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 268897 ']' 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 268897 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 268897 00:20:24.259 20:19:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:24.260 20:19:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:24.260 20:19:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 268897' 00:20:24.260 killing process with pid 268897 00:20:24.260 20:19:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 268897 00:20:24.260 [2024-05-16 20:19:11.271090] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:24.260 20:19:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 268897 00:20:24.518 20:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:24.518 20:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:24.518 20:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:24.518 20:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:24.518 20:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:24.518 20:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.518 20:19:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.518 20:19:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.048 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:27.048 00:20:27.048 real 0m6.251s 00:20:27.048 user 0m7.864s 00:20:27.048 sys 0m1.943s 00:20:27.048 20:19:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:27.048 20:19:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:27.048 ************************************ 00:20:27.048 END TEST nvmf_identify 00:20:27.048 ************************************ 00:20:27.048 20:19:13 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:27.048 20:19:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:27.048 20:19:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:27.048 20:19:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:27.048 ************************************ 00:20:27.048 START TEST nvmf_perf 00:20:27.048 ************************************ 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:27.048 * Looking for test storage... 00:20:27.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:27.048 20:19:13 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.048 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:27.049 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:27.049 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:27.049 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.049 20:19:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.049 20:19:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.049 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:27.049 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:27.049 20:19:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:27.049 20:19:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:28.946 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.946 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:28.946 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:28.946 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:28.946 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:28.946 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:28.946 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:28.946 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:28.946 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:28.946 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:28.946 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:28.946 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:28.946 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:28.946 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:28.946 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:28.946 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.946 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.946 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.946 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.946 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:28.947 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:28.947 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:28.947 Found net devices under 0000:09:00.0: cvl_0_0 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:28.947 Found net devices under 0000:09:00.1: cvl_0_1 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:28.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:20:28.947 00:20:28.947 --- 10.0.0.2 ping statistics --- 00:20:28.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.947 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:28.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:20:28.947 00:20:28.947 --- 10.0.0.1 ping statistics --- 00:20:28.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.947 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=271007 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 271007 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 271007 ']' 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:28.947 20:19:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:28.947 [2024-05-16 20:19:15.934956] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:20:28.947 [2024-05-16 20:19:15.935037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.947 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.947 [2024-05-16 20:19:16.002615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:29.204 [2024-05-16 20:19:16.120284] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.204 [2024-05-16 20:19:16.120336] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.204 [2024-05-16 20:19:16.120350] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.204 [2024-05-16 20:19:16.120368] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.204 [2024-05-16 20:19:16.120378] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:29.204 [2024-05-16 20:19:16.120464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.204 [2024-05-16 20:19:16.120530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.204 [2024-05-16 20:19:16.123887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.204 [2024-05-16 20:19:16.123893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.204 20:19:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:29.204 20:19:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:20:29.204 20:19:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:29.204 20:19:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:29.204 20:19:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:29.204 20:19:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.204 20:19:16 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:29.204 20:19:16 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:32.478 20:19:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:32.478 20:19:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:32.478 20:19:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:20:32.478 20:19:19 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:33.044 20:19:19 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:33.044 20:19:19 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:20:33.044 20:19:19 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:33.044 20:19:19 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:33.044 20:19:19 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:33.044 [2024-05-16 20:19:20.136009] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:20:33.044 20:19:20 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:33.301 20:19:20 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:33.302 20:19:20 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:33.559 20:19:20 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:33.559 20:19:20 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:33.818 20:19:20 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:34.075 [2024-05-16 20:19:21.127306] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:34.075 [2024-05-16 20:19:21.127593] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.075 20:19:21 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:34.333 20:19:21 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:20:34.333 20:19:21 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:20:34.333 20:19:21 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:34.333 20:19:21 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:20:35.706 Initializing NVMe Controllers 00:20:35.706 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:20:35.706 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:20:35.706 Initialization complete. Launching workers. 00:20:35.706 ======================================================== 00:20:35.706 Latency(us) 00:20:35.706 Device Information : IOPS MiB/s Average min max 00:20:35.706 PCIE (0000:0b:00.0) NSID 1 from core 0: 85853.45 335.37 372.30 33.86 4444.49 00:20:35.706 ======================================================== 00:20:35.706 Total : 85853.45 335.37 372.30 33.86 4444.49 00:20:35.706 00:20:35.706 20:19:22 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:35.706 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.086 Initializing NVMe Controllers 00:20:37.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:37.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:37.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:37.086 Initialization complete. Launching workers. 
00:20:37.086 ======================================================== 00:20:37.086 Latency(us) 00:20:37.086 Device Information : IOPS MiB/s Average min max 00:20:37.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 106.63 0.42 9407.43 153.69 45801.24 00:20:37.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 54.81 0.21 18808.57 7782.52 47900.77 00:20:37.086 ======================================================== 00:20:37.086 Total : 161.43 0.63 12599.18 153.69 47900.77 00:20:37.086 00:20:37.086 20:19:24 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:37.086 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.458 Initializing NVMe Controllers 00:20:38.458 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:38.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:38.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:38.458 Initialization complete. Launching workers. 00:20:38.458 ======================================================== 00:20:38.458 Latency(us) 00:20:38.458 Device Information : IOPS MiB/s Average min max 00:20:38.458 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8409.40 32.85 3804.52 831.65 7912.85 00:20:38.458 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3867.15 15.11 8302.78 6069.56 15923.00 00:20:38.458 ======================================================== 00:20:38.458 Total : 12276.55 47.96 5221.49 831.65 15923.00 00:20:38.458 00:20:38.458 20:19:25 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:38.458 20:19:25 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:38.458 20:19:25 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:38.458 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.982 Initializing NVMe Controllers 00:20:40.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.982 Controller IO queue size 128, less than required. 00:20:40.982 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.982 Controller IO queue size 128, less than required. 00:20:40.982 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:40.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:40.982 Initialization complete. Launching workers. 
00:20:40.982 ======================================================== 00:20:40.982 Latency(us) 00:20:40.982 Device Information : IOPS MiB/s Average min max 00:20:40.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1585.91 396.48 82273.15 57310.14 135798.63 00:20:40.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 579.10 144.78 225838.12 85620.51 375518.51 00:20:40.982 ======================================================== 00:20:40.982 Total : 2165.01 541.25 120674.21 57310.14 375518.51 00:20:40.982 00:20:40.982 20:19:27 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:40.982 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.982 No valid NVMe controllers or AIO or URING devices found 00:20:40.982 Initializing NVMe Controllers 00:20:40.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.982 Controller IO queue size 128, less than required. 00:20:40.982 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.982 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:40.982 Controller IO queue size 128, less than required. 00:20:40.982 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.982 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:20:40.982 WARNING: Some requested NVMe devices were skipped 00:20:40.982 20:19:27 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:40.982 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.511 Initializing NVMe Controllers 00:20:43.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:43.511 Controller IO queue size 128, less than required. 00:20:43.511 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:43.511 Controller IO queue size 128, less than required. 00:20:43.511 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:43.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:43.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:43.511 Initialization complete. Launching workers. 
00:20:43.511
00:20:43.511 ====================
00:20:43.511 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:20:43.511 TCP transport:
00:20:43.511 polls: 8735
00:20:43.511 idle_polls: 5506
00:20:43.511 sock_completions: 3229
00:20:43.511 nvme_completions: 6083
00:20:43.511 submitted_requests: 9064
00:20:43.511 queued_requests: 1
00:20:43.511
00:20:43.511 ====================
00:20:43.511 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:20:43.511 TCP transport:
00:20:43.511 polls: 8962
00:20:43.511 idle_polls: 5517
00:20:43.511 sock_completions: 3445
00:20:43.511 nvme_completions: 6327
00:20:43.511 submitted_requests: 9388
00:20:43.511 queued_requests: 1
00:20:43.511 ========================================================
00:20:43.511 Latency(us)
00:20:43.511 Device Information : IOPS MiB/s Average min max
00:20:43.511 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1520.33 380.08 86412.53 47712.41 142882.74
00:20:43.511 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1581.33 395.33 82108.54 39243.76 120087.27
00:20:43.511 ========================================================
00:20:43.511 Total : 3101.66 775.42 84218.22 39243.76 142882.74
00:20:43.511
00:20:43.511 20:19:30 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync
00:20:43.511 20:19:30 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:43.768 rmmod nvme_tcp
00:20:43.768 rmmod nvme_fabrics
00:20:43.768 rmmod nvme_keyring
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 271007 ']'
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 271007
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 271007 ']'
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 271007
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 271007
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:20:43.768 20:19:30 nvmf_tcp.nvmf_perf --
common/autotest_common.sh@964 -- # echo 'killing process with pid 271007' 00:20:43.769 killing process with pid 271007 00:20:43.769 20:19:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 271007 00:20:43.769 [2024-05-16 20:19:30.847166] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:43.769 20:19:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 271007 00:20:45.665 20:19:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:45.665 20:19:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:45.665 20:19:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:45.665 20:19:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:45.665 20:19:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:45.665 20:19:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.665 20:19:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.665 20:19:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.571 20:19:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:47.571 00:20:47.571 real 0m20.755s 00:20:47.571 user 1m4.059s 00:20:47.571 sys 0m5.139s 00:20:47.571 20:19:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:47.571 20:19:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:47.571 ************************************ 00:20:47.571 END TEST nvmf_perf 00:20:47.571 ************************************ 00:20:47.571 20:19:34 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:47.571 20:19:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:47.571 20:19:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:47.571 20:19:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:47.571 ************************************ 00:20:47.571 START TEST nvmf_fio_host 00:20:47.571 ************************************ 00:20:47.571 20:19:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:47.571 * Looking for test storage... 
00:20:47.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:47.571 20:19:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:20:47.572 20:19:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
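Note: the trace that follows is nvmf/common.sh enumerating supported NICs by PCI vendor:device ID (0x8086:0x159b is the Intel E810 family driven by ice on this node) and then resolving each PCI function to its kernel netdev through sysfs. A rough standalone equivalent, assuming a standard lspci and sysfs layout rather than the script's cached PCI bus map:

  # list E810 functions by vendor:device ID, then print the netdev bound to each one
  for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "$bdf -> $(ls /sys/bus/pci/devices/$bdf/net/ 2>/dev/null)"
  done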
00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:49.474 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:49.474 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:49.474 Found net devices under 0000:09:00.0: cvl_0_0 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:49.474 Found net devices under 0000:09:00.1: cvl_0_1 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:49.474 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:49.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:49.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:20:49.475 00:20:49.475 --- 10.0.0.2 ping statistics --- 00:20:49.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.475 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:49.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:49.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:20:49.475 00:20:49.475 --- 10.0.0.1 ping statistics --- 00:20:49.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.475 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=274839 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 274839 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 274839 ']' 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:49.475 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.475 [2024-05-16 20:19:36.582099] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:20:49.475 [2024-05-16 20:19:36.582189] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.475 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.733 [2024-05-16 20:19:36.647751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:49.733 [2024-05-16 20:19:36.756483] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
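Note: at this point nvmf_tgt is running inside the cvl_0_0_ns_spdk namespace and the test configures it over the default RPC socket. The rpc_cmd calls traced on the following lines boil down to roughly this sequence (a sketch; rpc.py talks to /var/tmp/spdk.sock by default, and the reading of -u is an assumption):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport; -u is likely the in-capsule data size
  $rpc bdev_malloc_create 64 512 -b Malloc1         # 64 MB malloc bdev with 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420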
00:20:49.733 [2024-05-16 20:19:36.756543] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.733 [2024-05-16 20:19:36.756557] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.733 [2024-05-16 20:19:36.756569] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.733 [2024-05-16 20:19:36.756579] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.733 [2024-05-16 20:19:36.756884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.733 [2024-05-16 20:19:36.756914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.733 [2024-05-16 20:19:36.756976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:49.733 [2024-05-16 20:19:36.756979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.991 [2024-05-16 20:19:36.887539] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.991 Malloc1 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:20:49.991 [2024-05-16 20:19:36.968619] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:49.991 [2024-05-16 20:19:36.968946] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:20:49.991 20:19:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:49.991 20:19:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:49.991 20:19:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:49.991 20:19:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:49.991 20:19:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:49.991 20:19:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:20:49.991 20:19:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:49.991 20:19:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:49.991 
20:19:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:49.992 20:19:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:49.992 20:19:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:50.249 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:50.249 fio-3.35 00:20:50.249 Starting 1 thread 00:20:50.249 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.775 00:20:52.775 test: (groupid=0, jobs=1): err= 0: pid=275054: Thu May 16 20:19:39 2024 00:20:52.775 read: IOPS=8324, BW=32.5MiB/s (34.1MB/s)(65.3MiB/2007msec) 00:20:52.775 slat (usec): min=2, max=134, avg= 2.69, stdev= 1.79 00:20:52.775 clat (usec): min=2303, max=14248, avg=8394.70, stdev=669.07 00:20:52.775 lat (usec): min=2329, max=14250, avg=8397.39, stdev=668.99 00:20:52.775 clat percentiles (usec): 00:20:52.775 | 1.00th=[ 6849], 5.00th=[ 7373], 10.00th=[ 7635], 20.00th=[ 7898], 00:20:52.775 | 30.00th=[ 8029], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8586], 00:20:52.775 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9241], 95.00th=[ 9372], 00:20:52.775 | 99.00th=[ 9896], 99.50th=[10028], 99.90th=[11338], 99.95th=[13173], 00:20:52.775 | 99.99th=[14222] 00:20:52.775 bw ( KiB/s): min=31792, max=33968, per=99.98%, avg=33292.00, stdev=1011.29, samples=4 00:20:52.775 iops : min= 7948, max= 8492, avg=8323.00, stdev=252.82, samples=4 00:20:52.775 write: IOPS=8331, BW=32.5MiB/s (34.1MB/s)(65.3MiB/2007msec); 0 zone resets 00:20:52.775 slat (usec): min=2, max=117, avg= 2.81, stdev= 1.36 00:20:52.775 clat (usec): min=1582, max=12366, avg=6903.34, stdev=590.57 00:20:52.775 lat (usec): min=1590, max=12369, avg=6906.15, stdev=590.58 00:20:52.775 clat percentiles (usec): 00:20:52.775 | 1.00th=[ 5669], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6456], 00:20:52.775 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7046], 00:20:52.775 | 70.00th=[ 7177], 80.00th=[ 7373], 90.00th=[ 7635], 95.00th=[ 7832], 00:20:52.775 | 99.00th=[ 8160], 99.50th=[ 8455], 99.90th=[10945], 99.95th=[11338], 00:20:52.775 | 99.99th=[12387] 00:20:52.775 bw ( KiB/s): min=32640, max=33728, per=99.96%, avg=33312.00, stdev=486.01, samples=4 00:20:52.775 iops : min= 8160, max= 8432, avg=8328.00, stdev=121.50, samples=4 00:20:52.775 lat (msec) : 2=0.02%, 4=0.08%, 10=99.50%, 20=0.39% 00:20:52.775 cpu : usr=60.02%, sys=38.38%, ctx=96, majf=0, minf=40 00:20:52.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:52.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:52.775 issued rwts: total=16708,16721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.775 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:52.775 00:20:52.775 Run status group 0 (all jobs): 00:20:52.775 READ: bw=32.5MiB/s (34.1MB/s), 32.5MiB/s-32.5MiB/s (34.1MB/s-34.1MB/s), io=65.3MiB (68.4MB), run=2007-2007msec 00:20:52.775 WRITE: bw=32.5MiB/s (34.1MB/s), 32.5MiB/s-32.5MiB/s (34.1MB/s-34.1MB/s), io=65.3MiB (68.5MB), run=2007-2007msec 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:52.775 20:19:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:52.775 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:52.775 fio-3.35 00:20:52.775 Starting 1 thread 00:20:52.775 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.304 00:20:55.304 test: (groupid=0, jobs=1): err= 0: pid=275432: Thu May 16 20:19:42 2024 00:20:55.304 read: IOPS=7449, BW=116MiB/s (122MB/s)(234MiB/2007msec) 00:20:55.304 slat (nsec): min=2801, max=92503, avg=3797.95, stdev=1961.49 00:20:55.304 clat (usec): min=2252, max=19215, avg=9558.25, stdev=2073.99 00:20:55.304 lat (usec): min=2261, max=19218, avg=9562.05, 
stdev=2074.01 00:20:55.304 clat percentiles (usec): 00:20:55.304 | 1.00th=[ 5211], 5.00th=[ 6390], 10.00th=[ 7111], 20.00th=[ 7963], 00:20:55.304 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:20:55.304 | 70.00th=[10290], 80.00th=[10945], 90.00th=[11994], 95.00th=[13698], 00:20:55.304 | 99.00th=[15926], 99.50th=[16581], 99.90th=[17171], 99.95th=[18220], 00:20:55.304 | 99.99th=[19006] 00:20:55.304 bw ( KiB/s): min=50720, max=67936, per=50.05%, avg=59656.00, stdev=8501.70, samples=4 00:20:55.304 iops : min= 3170, max= 4246, avg=3728.50, stdev=531.36, samples=4 00:20:55.304 write: IOPS=4374, BW=68.3MiB/s (71.7MB/s)(123MiB/1793msec); 0 zone resets 00:20:55.304 slat (usec): min=30, max=192, avg=34.44, stdev= 6.35 00:20:55.304 clat (usec): min=5915, max=23043, avg=13724.79, stdev=2248.79 00:20:55.304 lat (usec): min=5949, max=23075, avg=13759.23, stdev=2248.75 00:20:55.304 clat percentiles (usec): 00:20:55.304 | 1.00th=[ 8979], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[11863], 00:20:55.304 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13698], 60.00th=[14222], 00:20:55.304 | 70.00th=[14877], 80.00th=[15533], 90.00th=[16581], 95.00th=[17433], 00:20:55.304 | 99.00th=[19268], 99.50th=[19530], 99.90th=[22676], 99.95th=[22938], 00:20:55.304 | 99.99th=[22938] 00:20:55.304 bw ( KiB/s): min=53760, max=69632, per=89.16%, avg=62400.00, stdev=8047.73, samples=4 00:20:55.304 iops : min= 3360, max= 4352, avg=3900.00, stdev=502.98, samples=4 00:20:55.304 lat (msec) : 4=0.15%, 10=43.25%, 20=56.46%, 50=0.13% 00:20:55.304 cpu : usr=72.25%, sys=26.46%, ctx=61, majf=0, minf=68 00:20:55.304 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:20:55.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:55.304 issued rwts: total=14951,7843,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.304 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:55.304 00:20:55.304 Run status group 0 (all jobs): 00:20:55.304 READ: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=234MiB (245MB), run=2007-2007msec 00:20:55.304 WRITE: bw=68.3MiB/s (71.7MB/s), 68.3MiB/s-68.3MiB/s (71.7MB/s-71.7MB/s), io=123MiB (128MB), run=1793-1793msec 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:55.304 rmmod nvme_tcp 00:20:55.304 rmmod nvme_fabrics 00:20:55.304 rmmod nvme_keyring 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 274839 ']' 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 274839 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 274839 ']' 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 274839 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 274839 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 274839' 00:20:55.304 killing process with pid 274839 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 274839 00:20:55.304 [2024-05-16 20:19:42.231546] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:55.304 20:19:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 274839 00:20:55.563 20:19:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:55.563 20:19:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:55.563 20:19:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:55.563 20:19:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:55.563 20:19:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:55.563 20:19:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.563 20:19:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:55.563 20:19:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.465 20:19:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:57.465 00:20:57.465 real 0m10.050s 00:20:57.465 user 0m25.952s 00:20:57.465 sys 0m4.070s 00:20:57.465 20:19:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:57.465 20:19:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.465 ************************************ 00:20:57.465 END TEST nvmf_fio_host 00:20:57.465 ************************************ 00:20:57.465 20:19:44 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:57.465 20:19:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:57.465 20:19:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 
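Note on the fio runs earlier in nvmf_fio_host (host/fio.sh@39 and @43): the test preloads SPDK's fio plugin so that fio's ioengine=spdk drives the NVMe-oF TCP target directly, with the connection parameters packed into --filename. A minimal equivalent invocation, assuming fio lives at /usr/src/fio as it does on this CI node:

  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096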
00:20:57.465 20:19:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:57.465 ************************************ 00:20:57.465 START TEST nvmf_failover 00:20:57.465 ************************************ 00:20:57.465 20:19:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:57.723 * Looking for test storage... 00:20:57.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:57.723 20:19:44 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:20:57.724 20:19:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:59.624 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:59.624 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:59.624 Found net devices under 0000:09:00.0: cvl_0_0 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:59.624 Found net devices under 0000:09:00.1: cvl_0_1 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:59.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:59.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:20:59.624 00:20:59.624 --- 10.0.0.2 ping statistics --- 00:20:59.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.624 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:20:59.624 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:59.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:59.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:20:59.624 00:20:59.624 --- 10.0.0.1 ping statistics --- 00:20:59.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.625 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=277576 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 277576 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 277576 ']' 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:59.625 20:19:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:59.625 [2024-05-16 20:19:46.723669] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:20:59.625 [2024-05-16 20:19:46.723759] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.625 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.883 [2024-05-16 20:19:46.793305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:59.883 [2024-05-16 20:19:46.912836] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.883 [2024-05-16 20:19:46.912923] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.883 [2024-05-16 20:19:46.912940] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.883 [2024-05-16 20:19:46.912953] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.883 [2024-05-16 20:19:46.912964] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:59.883 [2024-05-16 20:19:46.913072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.883 [2024-05-16 20:19:46.914873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:59.883 [2024-05-16 20:19:46.914886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.141 20:19:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:00.141 20:19:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:21:00.141 20:19:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:00.141 20:19:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.141 20:19:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:00.141 20:19:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.141 20:19:47 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:00.399 [2024-05-16 20:19:47.330457] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.399 20:19:47 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:00.657 Malloc0 00:21:00.657 20:19:47 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:00.914 20:19:47 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:01.171 20:19:48 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:01.428 [2024-05-16 20:19:48.453553] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:01.428 [2024-05-16 20:19:48.453904] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.428 20:19:48 nvmf_tcp.nvmf_failover 
-- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:01.685 [2024-05-16 20:19:48.742584] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:01.685 20:19:48 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:01.943 [2024-05-16 20:19:49.011457] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:01.943 20:19:49 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=277870 00:21:01.943 20:19:49 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:01.943 20:19:49 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:01.943 20:19:49 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 277870 /var/tmp/bdevperf.sock 00:21:01.943 20:19:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 277870 ']' 00:21:01.943 20:19:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.943 20:19:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:01.943 20:19:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
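[editor's note] At this point the harness has finished the target-side setup recorded in the xtrace above and is waiting for bdevperf to come up on /var/tmp/bdevperf.sock. For readability, a condensed sketch of that setup sequence follows; the commands are taken from the log above, with the long workspace paths shortened to repo-relative ones (an assumption), and nvmf_tgt assumed to be already listening on the default RPC socket /var/tmp/spdk.sock inside the cvl_0_0_ns_spdk namespace:

    # create the TCP transport and a 64 MiB malloc backing bdev (sizes from the log)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # expose it through one subsystem with three listeners, so ports can be
    # removed and re-added later to exercise failover
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done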
00:21:01.943 20:19:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:01.943 20:19:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:02.508 20:19:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:02.508 20:19:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:21:02.508 20:19:49 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:02.765 NVMe0n1 00:21:02.765 20:19:49 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:03.023 00:21:03.023 20:19:50 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=277996 00:21:03.023 20:19:50 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:03.023 20:19:50 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:03.955 20:19:51 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:04.212 [2024-05-16 20:19:51.294370] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1b00 is same with the state(5) to be set 00:21:04.212 [2024-05-16 20:19:51.294458] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1b00 is same with the state(5) to be set 00:21:04.212 [2024-05-16 20:19:51.294491] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1b00 is same with the state(5) to be set 00:21:04.212 [2024-05-16 20:19:51.294503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1b00 is same with the state(5) to be set 00:21:04.212 [2024-05-16 20:19:51.294516] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1b00 is same with the state(5) to be set 00:21:04.212 [2024-05-16 20:19:51.294528] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1b00 is same with the state(5) to be set 00:21:04.212 [2024-05-16 20:19:51.294539] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1b00 is same with the state(5) to be set 00:21:04.212 [2024-05-16 20:19:51.294552] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1b00 is same with the state(5) to be set 00:21:04.212 [2024-05-16 20:19:51.294563] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1b00 is same with the state(5) to be set 00:21:04.213 [2024-05-16 20:19:51.294575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1b00 is same with the state(5) to be set 00:21:04.213 [2024-05-16 20:19:51.294587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1b00 is same with the state(5) to be set 00:21:04.213 [2024-05-16 20:19:51.294599] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1b00 is same with the state(5) to be set 00:21:04.213 [2024-05-16 20:19:51.294614] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1b00 is same with the state(5) to be set 00:21:04.213 [2024-05-16 20:19:51.294625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1b00 is same with the state(5) to be set 00:21:04.213 [2024-05-16 20:19:51.294637] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1b00 is same with the state(5) to be set 00:21:04.213 20:19:51 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:07.490 20:19:54 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:07.748 00:21:07.748 20:19:54 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:08.005 20:19:54 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:11.295 20:19:57 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.295 [2024-05-16 20:19:58.237171] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.295 20:19:58 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:12.228 20:19:59 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:12.485 20:19:59 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 277996 00:21:19.048 0 00:21:19.048 20:20:05 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 277870 00:21:19.048 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 277870 ']' 00:21:19.048 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 277870 00:21:19.048 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:21:19.048 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:19.048 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 277870 00:21:19.048 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:19.048 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:19.048 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 277870' 00:21:19.048 killing process with pid 277870 00:21:19.048 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 277870 00:21:19.048 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 277870 00:21:19.048 20:20:05 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:19.048 [2024-05-16 20:19:49.074663] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:21:19.048 [2024-05-16 20:19:49.074746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277870 ] 00:21:19.048 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.048 [2024-05-16 20:19:49.140249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.048 [2024-05-16 20:19:49.249661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.048 Running I/O for 15 seconds... 00:21:19.048 [2024-05-16 20:19:51.295696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.048 [2024-05-16 20:19:51.295738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.048 [2024-05-16 20:19:51.295764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.048 [2024-05-16 20:19:51.295779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.048 [2024-05-16 20:19:51.295795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.048 [2024-05-16 20:19:51.295809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.048 [2024-05-16 20:19:51.295824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.048 [2024-05-16 20:19:51.295874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.048 [2024-05-16 20:19:51.295892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.048 [2024-05-16 20:19:51.295907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.048 [2024-05-16 20:19:51.295923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.048 [2024-05-16 20:19:51.295938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.048 [2024-05-16 20:19:51.295953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.048 [2024-05-16 20:19:51.295967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.048 [2024-05-16 20:19:51.295982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.048 [2024-05-16 20:19:51.295995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.048 [2024-05-16 20:19:51.296010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78024 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:19.048 [2024-05-16 20:19:51.296024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.048 [2024-05-16 20:19:51.296039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.048 [2024-05-16 20:19:51.296053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 
[2024-05-16 20:19:51.296345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.049 [2024-05-16 20:19:51.296706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.049 [2024-05-16 20:19:51.296733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.049 [2024-05-16 20:19:51.296761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.049 [2024-05-16 20:19:51.296791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.049 [2024-05-16 20:19:51.296818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.049 [2024-05-16 20:19:51.296885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.049 [2024-05-16 20:19:51.296915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.049 [2024-05-16 20:19:51.296943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.049 [2024-05-16 20:19:51.296970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.296985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.049 [2024-05-16 20:19:51.296998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.297013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.049 [2024-05-16 20:19:51.297027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.297042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.049 [2024-05-16 20:19:51.297055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.297069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.049 [2024-05-16 20:19:51.297082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.297097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.049 [2024-05-16 20:19:51.297111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.297126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.049 [2024-05-16 20:19:51.297148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.297162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.049 [2024-05-16 20:19:51.297190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.297215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.049 [2024-05-16 20:19:51.297228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.297242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.049 [2024-05-16 20:19:51.297259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:19.049 [2024-05-16 20:19:51.297274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.049 [2024-05-16 20:19:51.297288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.297315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.297343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.297375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.297402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.050 [2024-05-16 20:19:51.297429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.050 [2024-05-16 20:19:51.297457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.050 [2024-05-16 20:19:51.297484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.050 [2024-05-16 20:19:51.297512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.050 [2024-05-16 20:19:51.297540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 
20:19:51.297554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.050 [2024-05-16 20:19:51.297567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.050 [2024-05-16 20:19:51.297603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.050 [2024-05-16 20:19:51.297634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.050 [2024-05-16 20:19:51.297662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.050 [2024-05-16 20:19:51.297696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.297724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.297752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.297779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.297805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.297832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.297887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.297916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.297945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.297973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.297987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.298001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.298019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.298033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.298049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.298064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.298079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.298092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.298107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.298121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.298136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.298149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.298171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:110 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.298200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.298215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.298233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.298247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.298260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.298274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.298287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.298302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.298315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.298329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.298342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.298356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.298369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.298383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.298399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.298413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.298426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.050 [2024-05-16 20:19:51.298440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.050 [2024-05-16 20:19:51.298454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.298474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78672 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:19.051 [2024-05-16 20:19:51.298492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.298507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.051 [2024-05-16 20:19:51.298520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.298534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.051 [2024-05-16 20:19:51.298547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.298561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.051 [2024-05-16 20:19:51.298574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.298588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.051 [2024-05-16 20:19:51.298601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.298616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.051 [2024-05-16 20:19:51.298629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.298643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.051 [2024-05-16 20:19:51.298656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.298685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.051 [2024-05-16 20:19:51.298701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78728 len:8 PRP1 0x0 PRP2 0x0 00:21:19.051 [2024-05-16 20:19:51.298714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.298731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.051 [2024-05-16 20:19:51.298743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.051 [2024-05-16 20:19:51.298754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78736 len:8 PRP1 0x0 PRP2 0x0 00:21:19.051 [2024-05-16 20:19:51.298768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.298780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.051 [2024-05-16 20:19:51.298795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:21:19.051 [2024-05-16 20:19:51.298806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78744 len:8 PRP1 0x0 PRP2 0x0 00:21:19.051 [2024-05-16 20:19:51.298818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.298831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.051 [2024-05-16 20:19:51.298844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.051 [2024-05-16 20:19:51.298879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78752 len:8 PRP1 0x0 PRP2 0x0 00:21:19.051 [2024-05-16 20:19:51.298895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.298909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.051 [2024-05-16 20:19:51.298920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.051 [2024-05-16 20:19:51.298930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78760 len:8 PRP1 0x0 PRP2 0x0 00:21:19.051 [2024-05-16 20:19:51.298948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.298966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.051 [2024-05-16 20:19:51.298978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.051 [2024-05-16 20:19:51.298989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78768 len:8 PRP1 0x0 PRP2 0x0 00:21:19.051 [2024-05-16 20:19:51.299001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.299014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.051 [2024-05-16 20:19:51.299024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.051 [2024-05-16 20:19:51.299036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78776 len:8 PRP1 0x0 PRP2 0x0 00:21:19.051 [2024-05-16 20:19:51.299049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.299062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.051 [2024-05-16 20:19:51.299072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.051 [2024-05-16 20:19:51.299084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78784 len:8 PRP1 0x0 PRP2 0x0 00:21:19.051 [2024-05-16 20:19:51.299097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.299110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.051 [2024-05-16 20:19:51.299122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.051 
[2024-05-16 20:19:51.299133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78792 len:8 PRP1 0x0 PRP2 0x0 00:21:19.051 [2024-05-16 20:19:51.299146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.299163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.051 [2024-05-16 20:19:51.299174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.051 [2024-05-16 20:19:51.299185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78800 len:8 PRP1 0x0 PRP2 0x0 00:21:19.051 [2024-05-16 20:19:51.299198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.299215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.051 [2024-05-16 20:19:51.299226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.051 [2024-05-16 20:19:51.299238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78808 len:8 PRP1 0x0 PRP2 0x0 00:21:19.051 [2024-05-16 20:19:51.299250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.299264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.051 [2024-05-16 20:19:51.299274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.051 [2024-05-16 20:19:51.299286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78816 len:8 PRP1 0x0 PRP2 0x0 00:21:19.051 [2024-05-16 20:19:51.299298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.299311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.051 [2024-05-16 20:19:51.299323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.051 [2024-05-16 20:19:51.299334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78824 len:8 PRP1 0x0 PRP2 0x0 00:21:19.051 [2024-05-16 20:19:51.299357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.299375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.051 [2024-05-16 20:19:51.299386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.051 [2024-05-16 20:19:51.299396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78832 len:8 PRP1 0x0 PRP2 0x0 00:21:19.051 [2024-05-16 20:19:51.299409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.299422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.051 [2024-05-16 20:19:51.299432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.051 [2024-05-16 20:19:51.299443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78840 len:8 PRP1 0x0 PRP2 0x0 00:21:19.051 [2024-05-16 20:19:51.299455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.299468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.051 [2024-05-16 20:19:51.299479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.051 [2024-05-16 20:19:51.299489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78848 len:8 PRP1 0x0 PRP2 0x0 00:21:19.051 [2024-05-16 20:19:51.299502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.299515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.051 [2024-05-16 20:19:51.299525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.051 [2024-05-16 20:19:51.299536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78856 len:8 PRP1 0x0 PRP2 0x0 00:21:19.051 [2024-05-16 20:19:51.299548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.299561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.051 [2024-05-16 20:19:51.299572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.051 [2024-05-16 20:19:51.299583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78864 len:8 PRP1 0x0 PRP2 0x0 00:21:19.051 [2024-05-16 20:19:51.299598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.299611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.051 [2024-05-16 20:19:51.299622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.051 [2024-05-16 20:19:51.299633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78872 len:8 PRP1 0x0 PRP2 0x0 00:21:19.051 [2024-05-16 20:19:51.299645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.051 [2024-05-16 20:19:51.299658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.051 [2024-05-16 20:19:51.299669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.051 [2024-05-16 20:19:51.299680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78880 len:8 PRP1 0x0 PRP2 0x0 00:21:19.052 [2024-05-16 20:19:51.299692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:51.299708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.052 [2024-05-16 20:19:51.299719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.052 [2024-05-16 20:19:51.299729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:78888 len:8 PRP1 0x0 PRP2 0x0 00:21:19.052 [2024-05-16 20:19:51.299742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:51.299760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.052 [2024-05-16 20:19:51.299771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.052 [2024-05-16 20:19:51.299782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78896 len:8 PRP1 0x0 PRP2 0x0 00:21:19.052 [2024-05-16 20:19:51.299794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:51.299807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.052 [2024-05-16 20:19:51.299818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.052 [2024-05-16 20:19:51.299828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78904 len:8 PRP1 0x0 PRP2 0x0 00:21:19.052 [2024-05-16 20:19:51.299848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:51.299869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.052 [2024-05-16 20:19:51.299880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.052 [2024-05-16 20:19:51.299891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78912 len:8 PRP1 0x0 PRP2 0x0 00:21:19.052 [2024-05-16 20:19:51.299903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:51.299916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.052 [2024-05-16 20:19:51.299926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.052 [2024-05-16 20:19:51.299938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78920 len:8 PRP1 0x0 PRP2 0x0 00:21:19.052 [2024-05-16 20:19:51.299950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:51.299963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.052 [2024-05-16 20:19:51.299978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.052 [2024-05-16 20:19:51.299989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78928 len:8 PRP1 0x0 PRP2 0x0 00:21:19.052 [2024-05-16 20:19:51.300001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:51.300014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.052 [2024-05-16 20:19:51.300025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.052 [2024-05-16 20:19:51.300036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78936 len:8 PRP1 0x0 PRP2 0x0 
00:21:19.052 [2024-05-16 20:19:51.300049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:51.300061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.052 [2024-05-16 20:19:51.300072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.052 [2024-05-16 20:19:51.300083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78944 len:8 PRP1 0x0 PRP2 0x0 00:21:19.052 [2024-05-16 20:19:51.300095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:51.300108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.052 [2024-05-16 20:19:51.300119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.052 [2024-05-16 20:19:51.300129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78952 len:8 PRP1 0x0 PRP2 0x0 00:21:19.052 [2024-05-16 20:19:51.300143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:51.300161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.052 [2024-05-16 20:19:51.300172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.052 [2024-05-16 20:19:51.300183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78960 len:8 PRP1 0x0 PRP2 0x0 00:21:19.052 [2024-05-16 20:19:51.300195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:51.300218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.052 [2024-05-16 20:19:51.300229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.052 [2024-05-16 20:19:51.300240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78968 len:8 PRP1 0x0 PRP2 0x0 00:21:19.052 [2024-05-16 20:19:51.300252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:51.300265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.052 [2024-05-16 20:19:51.300275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.052 [2024-05-16 20:19:51.300286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78976 len:8 PRP1 0x0 PRP2 0x0 00:21:19.052 [2024-05-16 20:19:51.300299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:51.300358] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8de030 was disconnected and freed. reset controller. 
00:21:19.052 [2024-05-16 20:19:51.300375] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:21:19.052 [2024-05-16 20:19:51.300407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.052 [2024-05-16 20:19:51.300425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.052 [2024-05-16 20:19:51.300448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.052 [2024-05-16 20:19:51.300462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.052 [2024-05-16 20:19:51.300475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.052 [2024-05-16 20:19:51.300488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.052 [2024-05-16 20:19:51.300502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.052 [2024-05-16 20:19:51.300515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.052 [2024-05-16 20:19:51.300528] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:19.052 [2024-05-16 20:19:51.303872] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:19.052 [2024-05-16 20:19:51.303909] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bf060 (9): Bad file descriptor
00:21:19.052 [2024-05-16 20:19:51.339105] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:19.052 [2024-05-16 20:19:54.962828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.052 [2024-05-16 20:19:54.962899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:54.962929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.052 [2024-05-16 20:19:54.962946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:54.962962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.052 [2024-05-16 20:19:54.962976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:54.962992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.052 [2024-05-16 20:19:54.963006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:54.963021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.052 [2024-05-16 20:19:54.963049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:54.963065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.052 [2024-05-16 20:19:54.963078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:54.963092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.052 [2024-05-16 20:19:54.963106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:54.963120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.052 [2024-05-16 20:19:54.963134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:54.963157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.052 [2024-05-16 20:19:54.963171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:54.963186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.052 [2024-05-16 20:19:54.963200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.052 [2024-05-16 20:19:54.963214] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.052 [2024-05-16 20:19:54.963227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963497] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84760 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.963974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.963988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.964003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.964017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.964032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.053 [2024-05-16 20:19:54.964045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.964060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.053 [2024-05-16 20:19:54.964074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.964089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:19.053 [2024-05-16 20:19:54.964103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.964118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.053 [2024-05-16 20:19:54.964131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.964146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.053 [2024-05-16 20:19:54.964160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.964190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.053 [2024-05-16 20:19:54.964203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.964218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.053 [2024-05-16 20:19:54.964231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.053 [2024-05-16 20:19:54.964245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.053 [2024-05-16 20:19:54.964258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964400] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.964970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.964984] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.965002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.965016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.965031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.965045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.965060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.965073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.965088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.965102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.965117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.965131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.965146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.965159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.965174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.965188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.965203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.965217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.965232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.965246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.965261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.965274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.965289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.965302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.965317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.965331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.965347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.965364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.965379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.965394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.965408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.965422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.054 [2024-05-16 20:19:54.965437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.054 [2024-05-16 20:19:54.965450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.965465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.965479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.965494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.965508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.965523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.965542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.965557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.965571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:19.055 [2024-05-16 20:19:54.965586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.965599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.965614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.965628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.965643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.965656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.965671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.965685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.965700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.965714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.965732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.965746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.965761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.965775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.965790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.965803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.965823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.965837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.965858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.965874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.965889] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.965902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.965917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.965931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.965946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.965959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.965974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.965988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.055 [2024-05-16 20:19:54.966426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:102 nsid:1 lba:84512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.055 [2024-05-16 20:19:54.966635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.055 [2024-05-16 20:19:54.966665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.055 [2024-05-16 20:19:54.966680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.056 [2024-05-16 20:19:54.966692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84560 len:8 PRP1 0x0 PRP2 0x0 00:21:19.056 [2024-05-16 20:19:54.966705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:54.966765] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa886b0 was disconnected and freed. reset controller. 
00:21:19.056 [2024-05-16 20:19:54.966783] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:21:19.056 [2024-05-16 20:19:54.966815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.056 [2024-05-16 20:19:54.966832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.056 [2024-05-16 20:19:54.966848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.056 [2024-05-16 20:19:54.966872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.056 [2024-05-16 20:19:54.966896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.056 [2024-05-16 20:19:54.966916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.056 [2024-05-16 20:19:54.966930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.056 [2024-05-16 20:19:54.966943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.056 [2024-05-16 20:19:54.966957] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:19.056 [2024-05-16 20:19:54.967001] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bf060 (9): Bad file descriptor
00:21:19.056 [2024-05-16 20:19:54.970307] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:19.056 [2024-05-16 20:19:55.003087] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
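Note: the block above is the expected failover sequence in bdev_nvme: queued I/O on the old queue pair is completed as ABORTED - SQ DELETION (00/08), the TCP connection to 10.0.0.2:4421 is dropped, and the controller is reconnected and reset against 10.0.0.2:4422. A minimal sketch of how a controller with several TCP paths is typically registered through SPDK's rpc.py is shown below; the bdev name, adrfam, and the -x failover multipath mode are assumptions layered on top of the NQN, address, and ports that appear in this log.

  # Hedged sketch, not the exact commands used by this job: register one
  # bdev_nvme controller with three TCP paths so it can fail over between them.
  ./scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover
  ./scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # List the controller and the transport IDs it knows about:
  ./scripts/rpc.py bdev_nvme_get_controllers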
00:21:19.056 [2024-05-16 20:19:59.486439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.056 [2024-05-16 20:19:59.486502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.486530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.056 [2024-05-16 20:19:59.486546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.486578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.056 [2024-05-16 20:19:59.486592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.486607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.056 [2024-05-16 20:19:59.486620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.486635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.056 [2024-05-16 20:19:59.486648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.486662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.056 [2024-05-16 20:19:59.486676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.486690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.056 [2024-05-16 20:19:59.486703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.486717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.056 [2024-05-16 20:19:59.486731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.486745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.056 [2024-05-16 20:19:59.486758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.486772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.056 [2024-05-16 20:19:59.486785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.486800] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.056 [2024-05-16 20:19:59.486812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.486827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.056 [2024-05-16 20:19:59.486872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.486890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.056 [2024-05-16 20:19:59.486904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.486919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.056 [2024-05-16 20:19:59.486933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.486947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.056 [2024-05-16 20:19:59.486960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.486975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.056 [2024-05-16 20:19:59.486989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.487003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.056 [2024-05-16 20:19:59.487017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.487031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.056 [2024-05-16 20:19:59.487045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.487060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.056 [2024-05-16 20:19:59.487074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.487089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.056 [2024-05-16 20:19:59.487102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.487117] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.056 [2024-05-16 20:19:59.487131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.487146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.056 [2024-05-16 20:19:59.487174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.487188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.056 [2024-05-16 20:19:59.487201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.487216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.056 [2024-05-16 20:19:59.487228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.487247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.056 [2024-05-16 20:19:59.487261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.487290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.056 [2024-05-16 20:19:59.487304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.487319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.056 [2024-05-16 20:19:59.487333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.487347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.056 [2024-05-16 20:19:59.487361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.487375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.056 [2024-05-16 20:19:59.487389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.487404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.056 [2024-05-16 20:19:59.487417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.487432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21816 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.056 [2024-05-16 20:19:59.487446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.487461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.056 [2024-05-16 20:19:59.487474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.487489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.056 [2024-05-16 20:19:59.487502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.056 [2024-05-16 20:19:59.487517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.057 [2024-05-16 20:19:59.487531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.487546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.057 [2024-05-16 20:19:59.487559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.487574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.057 [2024-05-16 20:19:59.487587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.487602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.487616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.487639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.487653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.487668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.487681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.487696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.487709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.487724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:19.057 [2024-05-16 20:19:59.487737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.487751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.487765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.487779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.487792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.487807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.487821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.487836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.487849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.487873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.487887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.487902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.487915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.487930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.487943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.487958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.487971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.487986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.488003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.488020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.488034] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.488048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.488062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.488077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.488091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.488106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.488119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.488134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.488148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.488163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.488176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.488190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.488204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.488218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.488232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.488247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.488260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.488274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.488287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.488302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.488316] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.488330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.488344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.488362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.488376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.488391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.057 [2024-05-16 20:19:59.488405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.488420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.057 [2024-05-16 20:19:59.488433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.488448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.057 [2024-05-16 20:19:59.488461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.488476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.057 [2024-05-16 20:19:59.488490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.488505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.057 [2024-05-16 20:19:59.488518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.057 [2024-05-16 20:19:59.488534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.057 [2024-05-16 20:19:59.488547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.488561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.488575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.488590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.488603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.488618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.488631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.488646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.488659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.488674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.488687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.488702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.488719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.488734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.488747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.488762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.488776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.488790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.488804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.488818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.488832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.488847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.488868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.488884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.488898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:19.058 [2024-05-16 20:19:59.488913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.488927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.488942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.488956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.488971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.488984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.488999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489206] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.058 [2024-05-16 20:19:59.489739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.058 [2024-05-16 20:19:59.489754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.059 [2024-05-16 20:19:59.489768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.489782] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.059 [2024-05-16 20:19:59.489796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.489810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.059 [2024-05-16 20:19:59.489827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.489843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.059 [2024-05-16 20:19:59.489862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.489878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.059 [2024-05-16 20:19:59.489892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.489907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.059 [2024-05-16 20:19:59.489920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.489935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.059 [2024-05-16 20:19:59.489948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.489962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.059 [2024-05-16 20:19:59.489975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.489990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.059 [2024-05-16 20:19:59.490003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.490018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.059 [2024-05-16 20:19:59.490031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.490045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.059 [2024-05-16 20:19:59.490059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.490073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.059 [2024-05-16 20:19:59.490086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.490100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.059 [2024-05-16 20:19:59.490113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.490128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.059 [2024-05-16 20:19:59.490141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.490155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.059 [2024-05-16 20:19:59.490169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.490187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.059 [2024-05-16 20:19:59.490201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.490215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.059 [2024-05-16 20:19:59.490228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.490259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.059 [2024-05-16 20:19:59.490274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.059 [2024-05-16 20:19:59.490286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22360 len:8 PRP1 0x0 PRP2 0x0 00:21:19.059 [2024-05-16 20:19:59.490299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.490365] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8e29f0 was disconnected and freed. reset controller. 
00:21:19.059 [2024-05-16 20:19:59.490383] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:19.059 [2024-05-16 20:19:59.490418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.059 [2024-05-16 20:19:59.490437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.490452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.059 [2024-05-16 20:19:59.490465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.490478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.059 [2024-05-16 20:19:59.490491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.490505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.059 [2024-05-16 20:19:59.490517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.059 [2024-05-16 20:19:59.490530] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:19.059 [2024-05-16 20:19:59.490585] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bf060 (9): Bad file descriptor 00:21:19.059 [2024-05-16 20:19:59.493866] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:19.059 [2024-05-16 20:19:59.524018] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
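Later steps of this failover test verify that kind of recovery by checking that the NVMe0 controller is still registered with the bdevperf application. In essence, a minimal sketch using the RPC socket path from this run:
  # List the controllers bdevperf currently holds and confirm NVMe0 is among them.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0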
00:21:19.059
00:21:19.059 Latency(us)
00:21:19.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:19.059 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:19.059 Verification LBA range: start 0x0 length 0x4000
00:21:19.059 NVMe0n1 : 15.00 8805.05 34.39 237.06 0.00 14127.86 688.73 16505.36
00:21:19.059 ===================================================================================================================
00:21:19.059 Total : 8805.05 34.39 237.06 0.00 14127.86 688.73 16505.36
00:21:19.059 Received shutdown signal, test time was about 15.000000 seconds
00:21:19.059
00:21:19.059 Latency(us)
00:21:19.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:19.059 ===================================================================================================================
00:21:19.059 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:19.059 20:20:05 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:21:19.059 20:20:05 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:21:19.059 20:20:05 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:21:19.059 20:20:05 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=279845
00:21:19.059 20:20:05 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:21:19.059 20:20:05 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 279845 /var/tmp/bdevperf.sock
00:21:19.059 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 279845 ']'
00:21:19.059 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:19.059 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:21:19.059 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:19.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
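The host/failover.sh@65-@75 lines above reduce to a short shell pattern; a rough sketch, where try.txt is the run log this test writes and paths are as in this workspace:
  # One 'Resetting controller successful' message is expected per forced path change;
  # the first bdevperf run here must have produced exactly three.
  count=$(grep -c 'Resetting controller successful' try.txt)
  (( count == 3 )) || exit 1

  # Restart bdevperf in RPC mode: -z makes it wait for a perform_tests RPC on the
  # socket given with -r instead of starting the verify workload immediately.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!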
00:21:19.059 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:19.059 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:19.059 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:19.059 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:21:19.059 20:20:05 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:19.059 [2024-05-16 20:20:06.045739] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:19.059 20:20:06 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:19.316 [2024-05-16 20:20:06.290411] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:19.316 20:20:06 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:19.573 NVMe0n1 00:21:19.573 20:20:06 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:20.139 00:21:20.139 20:20:07 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:20.396 00:21:20.396 20:20:07 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:20.396 20:20:07 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:20.653 20:20:07 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:20.910 20:20:07 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:24.188 20:20:10 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:24.188 20:20:10 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:24.188 20:20:11 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=280507 00:21:24.188 20:20:11 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:24.188 20:20:11 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 280507 00:21:25.120 0 00:21:25.120 20:20:12 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:25.120 [2024-05-16 20:20:05.546355] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
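The RPC calls traced above (host/failover.sh@76-@84) boil down to roughly the following sketch, run from the spdk checkout with the target listening on 10.0.0.2 and bdevperf on /var/tmp/bdevperf.sock:
  # Target side: expose nqn.2016-06.io.spdk:cnode1 on two additional TCP ports.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # bdevperf side: attach the same controller name through all three listeners so
  # bdev_nvme has alternate paths to fail over to.
  for port in 4420 4421 4422; do
      scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done

  # Dropping the currently used 4420 path then forces a failover to 4421/4422.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1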
00:21:25.120 [2024-05-16 20:20:05.546440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid279845 ] 00:21:25.120 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.120 [2024-05-16 20:20:05.605876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.120 [2024-05-16 20:20:05.711922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.121 [2024-05-16 20:20:07.846258] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:25.121 [2024-05-16 20:20:07.846347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.121 [2024-05-16 20:20:07.846371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.121 [2024-05-16 20:20:07.846403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.121 [2024-05-16 20:20:07.846417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.121 [2024-05-16 20:20:07.846431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.121 [2024-05-16 20:20:07.846444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.121 [2024-05-16 20:20:07.846458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.121 [2024-05-16 20:20:07.846472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.121 [2024-05-16 20:20:07.846485] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.121 [2024-05-16 20:20:07.846529] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.121 [2024-05-16 20:20:07.846563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1052060 (9): Bad file descriptor 00:21:25.121 [2024-05-16 20:20:07.852748] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:25.121 Running I/O for 1 seconds... 
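That 1-second run (its summary follows below) is driven over the same socket; schematically, as in host/failover.sh@89-@92:
  # Start the verify job queued in the -z bdevperf instance and wait for it to finish.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests &
  run_test_pid=$!
  wait "$run_test_pid"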
00:21:25.121 00:21:25.121 Latency(us) 00:21:25.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.121 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:25.121 Verification LBA range: start 0x0 length 0x4000 00:21:25.121 NVMe0n1 : 1.01 8736.16 34.13 0.00 0.00 14582.35 2827.76 12379.02 00:21:25.121 =================================================================================================================== 00:21:25.121 Total : 8736.16 34.13 0.00 0.00 14582.35 2827.76 12379.02 00:21:25.121 20:20:12 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:25.121 20:20:12 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:25.378 20:20:12 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:25.634 20:20:12 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:25.634 20:20:12 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:25.891 20:20:12 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:26.148 20:20:13 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:29.421 20:20:16 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:29.421 20:20:16 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:29.421 20:20:16 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 279845 00:21:29.422 20:20:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 279845 ']' 00:21:29.422 20:20:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 279845 00:21:29.422 20:20:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:21:29.422 20:20:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:29.422 20:20:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 279845 00:21:29.422 20:20:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:29.422 20:20:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:29.422 20:20:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 279845' 00:21:29.422 killing process with pid 279845 00:21:29.422 20:20:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 279845 00:21:29.422 20:20:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 279845 00:21:29.679 20:20:16 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:29.679 20:20:16 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:29.937 20:20:17 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:29.937 20:20:17 
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:29.937 20:20:17 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:29.937 20:20:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:29.937 20:20:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:29.937 20:20:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:29.937 20:20:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:29.937 20:20:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:29.937 20:20:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:30.195 rmmod nvme_tcp 00:21:30.195 rmmod nvme_fabrics 00:21:30.195 rmmod nvme_keyring 00:21:30.195 20:20:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:30.195 20:20:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:30.195 20:20:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:30.195 20:20:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 277576 ']' 00:21:30.195 20:20:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 277576 00:21:30.195 20:20:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 277576 ']' 00:21:30.195 20:20:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 277576 00:21:30.195 20:20:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:21:30.195 20:20:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:30.195 20:20:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 277576 00:21:30.195 20:20:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:30.195 20:20:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:30.195 20:20:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 277576' 00:21:30.195 killing process with pid 277576 00:21:30.195 20:20:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 277576 00:21:30.195 [2024-05-16 20:20:17.154910] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:30.195 20:20:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 277576 00:21:30.453 20:20:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:30.453 20:20:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:30.453 20:20:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:30.453 20:20:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:30.453 20:20:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:30.453 20:20:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.453 20:20:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:30.453 20:20:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.352 20:20:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:32.610 00:21:32.610 real 0m34.904s 00:21:32.610 user 2m2.936s 
00:21:32.610 sys 0m5.837s 00:21:32.610 20:20:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:32.610 20:20:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:32.610 ************************************ 00:21:32.610 END TEST nvmf_failover 00:21:32.610 ************************************ 00:21:32.610 20:20:19 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:32.610 20:20:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:32.610 20:20:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:32.610 20:20:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:32.610 ************************************ 00:21:32.610 START TEST nvmf_host_discovery 00:21:32.610 ************************************ 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:32.610 * Looking for test storage... 00:21:32.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:32.610 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:21:32.611 20:20:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:32.611 20:20:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:32.611 20:20:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:32.611 20:20:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:32.611 20:20:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:32.611 20:20:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:32.611 20:20:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:32.611 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:32.611 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:32.611 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:32.611 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:32.611 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:32.611 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.611 20:20:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:32.611 20:20:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.611 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:32.611 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:32.611 20:20:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:21:32.611 20:20:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.509 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:34.510 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:34.510 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:34.510 Found net devices under 0000:09:00.0: cvl_0_0 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:34.510 Found net devices under 0000:09:00.1: cvl_0_1 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:34.510 20:20:21 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:34.510 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:34.767 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:34.767 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:34.767 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:34.767 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:34.767 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:34.767 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:34.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:34.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:21:34.767 00:21:34.767 --- 10.0.0.2 ping statistics --- 00:21:34.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.767 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:21:34.767 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:34.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:34.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:21:34.767 00:21:34.767 --- 10.0.0.1 ping statistics --- 00:21:34.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.767 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:21:34.767 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:34.767 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:34.767 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:34.767 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:34.768 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:34.768 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:34.768 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:34.768 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:34.768 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:34.768 20:20:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:34.768 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:34.768 20:20:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:34.768 20:20:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.768 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=283113 00:21:34.768 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:34.768 20:20:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 283113 00:21:34.768 20:20:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 283113 ']' 00:21:34.768 20:20:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.768 20:20:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:34.768 20:20:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.768 20:20:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:34.768 20:20:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.768 [2024-05-16 20:20:21.789073] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:21:34.768 [2024-05-16 20:20:21.789146] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.768 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.768 [2024-05-16 20:20:21.857768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.025 [2024-05-16 20:20:21.977902] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
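The nvmftestinit plumbing traced above amounts to a namespace-based split of the two E810 ports; a condensed sketch of the commands visible in the log (root required, interface names and addresses are specific to this node, target binary path shortened):
  # Move the target-side port cvl_0_0 into its own namespace with 10.0.0.2; the
  # initiator side keeps cvl_0_1 with 10.0.0.1 in the default namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Check reachability both ways, then start the nvmf target inside the namespace.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &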
00:21:35.025 [2024-05-16 20:20:21.977963] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.025 [2024-05-16 20:20:21.977980] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.025 [2024-05-16 20:20:21.977994] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.025 [2024-05-16 20:20:21.978006] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.025 [2024-05-16 20:20:21.978036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.025 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:35.025 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:21:35.025 20:20:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:35.025 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.025 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.025 20:20:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.025 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:35.025 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.025 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.025 [2024-05-16 20:20:22.133896] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.025 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.026 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:35.026 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.026 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.026 [2024-05-16 20:20:22.141813] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:35.026 [2024-05-16 20:20:22.142138] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:35.026 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.026 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:35.026 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.026 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.026 null0 00:21:35.026 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.026 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:35.026 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.026 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.026 null1 00:21:35.026 20:20:22 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.026 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:35.026 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.026 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.026 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.284 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=283252 00:21:35.284 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:35.284 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 283252 /tmp/host.sock 00:21:35.284 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 283252 ']' 00:21:35.284 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:21:35.284 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:35.284 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:35.284 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:35.284 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:35.284 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.284 [2024-05-16 20:20:22.218286] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:21:35.284 [2024-05-16 20:20:22.218368] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid283252 ] 00:21:35.284 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.284 [2024-05-16 20:20:22.282930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.284 [2024-05-16 20:20:22.395599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.540 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:35.540 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:21:35.540 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:35.540 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:35.540 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.541 20:20:22 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:35.541 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.798 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:35.798 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.799 [2024-05-16 20:20:22.807822] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:35.799 
20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:35.799 20:20:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:36.056 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.056 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:21:36.056 20:20:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:21:36.621 [2024-05-16 20:20:23.581508] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:36.621 [2024-05-16 20:20:23.581540] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:36.621 [2024-05-16 20:20:23.581571] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:36.621 [2024-05-16 20:20:23.668834] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:36.621 [2024-05-16 20:20:23.732675] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:36.621 [2024-05-16 20:20:23.732702] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:36.878 20:20:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:36.878 20:20:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:36.878 20:20:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:21:36.878 20:20:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:36.878 20:20:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:36.878 20:20:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.878 20:20:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:36.878 20:20:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.878 20:20:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:36.878 20:20:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:37.135 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:37.136 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:37.394 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.394 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:37.394 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:37.394 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:37.394 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:37.394 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:37.394 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:37.394 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:37.394 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:37.394 20:20:24 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:37.394 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:21:37.394 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:37.394 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:37.394 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.394 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.394 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.394 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:37.394 20:20:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:37.394 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:21:37.394 20:20:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.355 [2024-05-16 20:20:25.479675] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:38.355 [2024-05-16 20:20:25.480346] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:38.355 [2024-05-16 20:20:25.480398] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:38.355 20:20:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" 
]]' 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.631 [2024-05-16 20:20:25.566569] bdev_nvme.c:6902:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:38.631 20:20:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:21:38.903 [2024-05-16 20:20:25.869045] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:38.903 [2024-05-16 20:20:25.869069] 
bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:38.903 [2024-05-16 20:20:25.869079] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:39.505 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:39.505 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:39.505 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:21:39.505 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:39.505 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:39.505 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.505 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:39.505 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.505 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:39.505 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.764 [2024-05-16 20:20:26.703752] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:39.764 [2024-05-16 20:20:26.703803] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:39.764 [2024-05-16 20:20:26.704307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.764 [2024-05-16 20:20:26.704342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.764 [2024-05-16 20:20:26.704374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.764 [2024-05-16 20:20:26.704388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.764 [2024-05-16 20:20:26.704404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.764 [2024-05-16 20:20:26.704417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.764 [2024-05-16 20:20:26.704432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.764 [2024-05-16 20:20:26.704445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.764 [2024-05-16 20:20:26.704458] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c46620 is same with the state(5) to be set 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:39.764 20:20:26 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:39.764 [2024-05-16 20:20:26.714294] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c46620 (9): Bad file descriptor 00:21:39.764 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.764 [2024-05-16 20:20:26.724340] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:39.764 [2024-05-16 20:20:26.724580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.765 [2024-05-16 20:20:26.724610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c46620 with addr=10.0.0.2, port=4420 00:21:39.765 [2024-05-16 20:20:26.724628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c46620 is same with the state(5) to be set 00:21:39.765 [2024-05-16 20:20:26.724652] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c46620 (9): Bad file descriptor 00:21:39.765 [2024-05-16 20:20:26.724674] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:39.765 [2024-05-16 20:20:26.724696] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:39.765 [2024-05-16 20:20:26.724714] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:39.765 [2024-05-16 20:20:26.724735] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:39.765 [2024-05-16 20:20:26.734433] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:39.765 [2024-05-16 20:20:26.734594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.765 [2024-05-16 20:20:26.734622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c46620 with addr=10.0.0.2, port=4420 00:21:39.765 [2024-05-16 20:20:26.734639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c46620 is same with the state(5) to be set 00:21:39.765 [2024-05-16 20:20:26.734661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c46620 (9): Bad file descriptor 00:21:39.765 [2024-05-16 20:20:26.734681] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:39.765 [2024-05-16 20:20:26.734695] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:39.765 [2024-05-16 20:20:26.734709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:39.765 [2024-05-16 20:20:26.734727] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:39.765 [2024-05-16 20:20:26.744517] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:39.765 [2024-05-16 20:20:26.744687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.765 [2024-05-16 20:20:26.744715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c46620 with addr=10.0.0.2, port=4420 00:21:39.765 [2024-05-16 20:20:26.744732] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c46620 is same with the state(5) to be set 00:21:39.765 [2024-05-16 20:20:26.744755] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c46620 (9): Bad file descriptor 00:21:39.765 [2024-05-16 20:20:26.744776] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:39.765 [2024-05-16 20:20:26.744790] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:39.765 [2024-05-16 20:20:26.744803] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:39.765 [2024-05-16 20:20:26.744822] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:39.765 [2024-05-16 20:20:26.754599] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:39.765 [2024-05-16 20:20:26.754793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.765 [2024-05-16 20:20:26.754832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c46620 with addr=10.0.0.2, port=4420 00:21:39.765 [2024-05-16 20:20:26.754848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c46620 is same with the state(5) to be set 00:21:39.765 [2024-05-16 20:20:26.754881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c46620 (9): Bad file descriptor 00:21:39.765 [2024-05-16 20:20:26.754919] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:39.765 [2024-05-16 20:20:26.754937] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:39.765 [2024-05-16 20:20:26.754951] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:39.765 [2024-05-16 20:20:26.754970] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:39.765 [2024-05-16 20:20:26.764687] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:39.765 [2024-05-16 20:20:26.764877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.765 [2024-05-16 20:20:26.764905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c46620 with addr=10.0.0.2, port=4420 00:21:39.765 [2024-05-16 20:20:26.764922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c46620 is same with the state(5) to be set 00:21:39.765 [2024-05-16 20:20:26.764945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c46620 (9): Bad file descriptor 00:21:39.765 [2024-05-16 20:20:26.764994] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:39.765 [2024-05-16 20:20:26.765013] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:39.765 [2024-05-16 20:20:26.765026] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:39.765 [2024-05-16 20:20:26.765046] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:39.765 [2024-05-16 20:20:26.774772] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:39.765 [2024-05-16 20:20:26.774956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.765 [2024-05-16 20:20:26.774984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c46620 with addr=10.0.0.2, port=4420 00:21:39.765 [2024-05-16 20:20:26.775000] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c46620 is same with the state(5) to be set 00:21:39.765 [2024-05-16 20:20:26.775022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c46620 (9): Bad file descriptor 00:21:39.765 [2024-05-16 20:20:26.775056] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:39.765 [2024-05-16 20:20:26.775074] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:39.765 [2024-05-16 20:20:26.775087] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:39.765 [2024-05-16 20:20:26.775119] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.765 [2024-05-16 20:20:26.784867] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:39.765 [2024-05-16 20:20:26.785011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.765 [2024-05-16 20:20:26.785038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c46620 with addr=10.0.0.2, port=4420 00:21:39.765 [2024-05-16 20:20:26.785054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c46620 is same with the state(5) to be set 00:21:39.765 [2024-05-16 20:20:26.785075] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c46620 (9): Bad file descriptor 00:21:39.765 [2024-05-16 20:20:26.785107] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:39.765 [2024-05-16 20:20:26.785125] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:39.765 [2024-05-16 20:20:26.785137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:39.765 [2024-05-16 20:20:26.785156] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:39.765 [2024-05-16 20:20:26.791124] bdev_nvme.c:6765:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:39.765 [2024-05-16 20:20:26.791164] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:39.765 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:39.766 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.023 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:21:40.023 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # 
jq '. | length' 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.024 20:20:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.024 20:20:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:40.024 20:20:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:40.024 20:20:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:21:40.024 20:20:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:40.024 20:20:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:40.024 20:20:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.024 20:20:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.955 [2024-05-16 20:20:28.073010] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:40.955 [2024-05-16 20:20:28.073045] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:40.955 [2024-05-16 20:20:28.073069] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:41.213 [2024-05-16 20:20:28.159389] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:41.471 [2024-05-16 20:20:28.421270] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:41.471 [2024-05-16 20:20:28.421331] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:21:41.471 request: 00:21:41.471 { 00:21:41.471 "name": "nvme", 00:21:41.471 "trtype": "tcp", 00:21:41.471 "traddr": "10.0.0.2", 00:21:41.471 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:41.471 "adrfam": "ipv4", 00:21:41.471 "trsvcid": "8009", 00:21:41.471 "wait_for_attach": true, 00:21:41.471 "method": "bdev_nvme_start_discovery", 00:21:41.471 "req_id": 1 00:21:41.471 } 00:21:41.471 Got JSON-RPC error response 00:21:41.471 response: 00:21:41.471 { 00:21:41.471 "code": -17, 00:21:41.471 "message": "File exists" 00:21:41.471 } 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.471 request: 00:21:41.471 { 00:21:41.471 "name": "nvme_second", 00:21:41.471 "trtype": "tcp", 00:21:41.471 "traddr": "10.0.0.2", 00:21:41.471 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:41.471 "adrfam": "ipv4", 00:21:41.471 "trsvcid": "8009", 00:21:41.471 "wait_for_attach": true, 00:21:41.471 "method": "bdev_nvme_start_discovery", 00:21:41.471 "req_id": 1 00:21:41.471 } 00:21:41.471 Got JSON-RPC error response 00:21:41.471 response: 00:21:41.471 { 00:21:41.471 "code": -17, 00:21:41.471 "message": "File exists" 00:21:41.471 } 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:41.471 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.729 20:20:28 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:41.729 20:20:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:41.730 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:41.730 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:41.730 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:41.730 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.730 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:41.730 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.730 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:41.730 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.730 20:20:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.661 [2024-05-16 20:20:29.636739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.661 [2024-05-16 20:20:29.636807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c62e90 with addr=10.0.0.2, port=8010 00:21:42.661 [2024-05-16 20:20:29.636843] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:42.661 [2024-05-16 20:20:29.636870] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:42.661 [2024-05-16 20:20:29.636920] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:43.593 [2024-05-16 20:20:30.639295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.593 [2024-05-16 20:20:30.639368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c62e90 with addr=10.0.0.2, port=8010 00:21:43.593 [2024-05-16 20:20:30.639400] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:43.593 [2024-05-16 20:20:30.639415] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:43.593 [2024-05-16 20:20:30.639428] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:44.525 [2024-05-16 20:20:31.641396] bdev_nvme.c:7021:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:44.525 request: 00:21:44.525 { 00:21:44.525 "name": "nvme_second", 00:21:44.525 "trtype": "tcp", 00:21:44.525 "traddr": "10.0.0.2", 00:21:44.525 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:44.525 "adrfam": "ipv4", 00:21:44.525 "trsvcid": "8010", 00:21:44.525 "attach_timeout_ms": 3000, 00:21:44.525 "method": "bdev_nvme_start_discovery", 00:21:44.525 "req_id": 1 00:21:44.525 } 00:21:44.525 Got JSON-RPC error response 00:21:44.525 response: 00:21:44.525 { 00:21:44.525 "code": -110, 00:21:44.525 "message": "Connection timed out" 
00:21:44.525 } 00:21:44.525 20:20:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:44.525 20:20:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:44.525 20:20:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:44.525 20:20:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:44.525 20:20:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:44.525 20:20:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:44.525 20:20:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:44.525 20:20:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:44.525 20:20:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.525 20:20:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.525 20:20:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:44.525 20:20:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:44.525 20:20:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 283252 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:44.783 rmmod nvme_tcp 00:21:44.783 rmmod nvme_fabrics 00:21:44.783 rmmod nvme_keyring 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 283113 ']' 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 283113 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 283113 ']' 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 283113 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 283113 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 283113' 00:21:44.783 killing process with pid 283113 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 283113 00:21:44.783 [2024-05-16 20:20:31.771396] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:44.783 20:20:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 283113 00:21:45.041 20:20:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:45.041 20:20:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:45.041 20:20:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:45.041 20:20:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:45.041 20:20:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:45.041 20:20:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.041 20:20:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.041 20:20:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:47.589 00:21:47.589 real 0m14.549s 00:21:47.589 user 0m21.607s 00:21:47.589 sys 0m3.045s 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.589 ************************************ 00:21:47.589 END TEST nvmf_host_discovery 00:21:47.589 ************************************ 00:21:47.589 20:20:34 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:47.589 20:20:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:47.589 20:20:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:47.589 20:20:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:47.589 ************************************ 00:21:47.589 START TEST nvmf_host_multipath_status 00:21:47.589 ************************************ 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:47.589 * Looking for test storage... 
00:21:47.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:47.589 20:20:34 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:47.589 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:21:47.590 20:20:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:49.490 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:49.491 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:49.491 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
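For readers skimming the trace: the nvmf_tcp_init steps that follow (NET_TYPE=phy) amount to moving one of the detected E810 ports into a private network namespace for the target and addressing both ends. A minimal sketch of that prep, distilled from the commands traced below; the interface names (cvl_0_0, cvl_0_1), the cvl_0_0_ns_spdk namespace and the 10.0.0.x addresses are specific to this CI host, not fixed values.

# Sketch only -- distilled from the ip/iptables commands traced below.
ip netns add cvl_0_0_ns_spdk                        # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # connectivity check before the test proper starts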
00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:49.491 Found net devices under 0000:09:00.0: cvl_0_0 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:49.491 Found net devices under 0000:09:00.1: cvl_0_1 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:49.491 20:20:36 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:49.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:21:49.491 00:21:49.491 --- 10.0.0.2 ping statistics --- 00:21:49.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.491 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:49.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:21:49.491 00:21:49.491 --- 10.0.0.1 ping statistics --- 00:21:49.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.491 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=286433 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 286433 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 286433 ']' 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:49.491 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:49.491 [2024-05-16 20:20:36.366661] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
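Once nvmf_tgt has been launched inside the namespace and waitforlisten returns, the test provisions the target over JSON-RPC before any host-side work. A sketch of that sequence, distilled from the rpc.py calls traced below; the full script path is shortened here to rpc.py, and the target-side calls use the default /var/tmp/spdk.sock socket.

# Sketch distilled from the rpc.py calls traced below (paths shortened).
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r enables ANA reporting
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421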
00:21:49.491 [2024-05-16 20:20:36.366738] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.491 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.491 [2024-05-16 20:20:36.431989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:49.491 [2024-05-16 20:20:36.548251] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.491 [2024-05-16 20:20:36.548316] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.491 [2024-05-16 20:20:36.548332] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.491 [2024-05-16 20:20:36.548346] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.491 [2024-05-16 20:20:36.548357] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.492 [2024-05-16 20:20:36.548441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.492 [2024-05-16 20:20:36.548448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.749 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:49.749 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:21:49.749 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:49.749 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.749 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:49.749 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.749 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=286433 00:21:49.749 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:50.007 [2024-05-16 20:20:36.927866] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.007 20:20:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:50.265 Malloc0 00:21:50.265 20:20:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:50.523 20:20:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:50.780 20:20:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:51.037 [2024-05-16 20:20:37.946007] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:21:51.037 [2024-05-16 20:20:37.946280] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:51.037 20:20:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:51.295 [2024-05-16 20:20:38.186932] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:51.295 20:20:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=286713 00:21:51.295 20:20:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:51.295 20:20:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:51.295 20:20:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 286713 /var/tmp/bdevperf.sock 00:21:51.295 20:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 286713 ']' 00:21:51.295 20:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:51.295 20:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:51.295 20:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:51.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
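From this point on the trace repeats a single probe pattern: bdevperf attaches the same subsystem over both listeners with multipath enabled, the test flips the ANA state of each listener with set_ANA_state, waits, and then port_status() inspects bdev_nvme_get_io_paths through a jq filter per port. A sketch of that pattern, distilled from the commands traced below; rpc.py paths are shortened and /var/tmp/bdevperf.sock is the bdevperf RPC socket used in this run.

# Sketch of the repeated check pattern below (paths shortened).
# Host side: one controller, two paths, multipath enabled on the second attach.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
# Target side: change the ANA state of one listener (optimized / non_optimized / inaccessible).
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
# port_status(): ask bdevperf for its I/O paths and pick one field for one port.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
  | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'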
00:21:51.295 20:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:51.295 20:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:51.554 20:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:51.554 20:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:21:51.554 20:20:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:51.811 20:20:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:52.069 Nvme0n1 00:21:52.069 20:20:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:52.635 Nvme0n1 00:21:52.635 20:20:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:52.635 20:20:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:54.533 20:20:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:54.533 20:20:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:54.792 20:20:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:55.049 20:20:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:56.423 20:20:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:56.423 20:20:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:56.423 20:20:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.423 20:20:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:56.423 20:20:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:56.423 20:20:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:56.423 20:20:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.423 20:20:43 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:56.679 20:20:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:56.679 20:20:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:56.679 20:20:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.679 20:20:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:56.936 20:20:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:56.936 20:20:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:56.936 20:20:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.936 20:20:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:57.193 20:20:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:57.193 20:20:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:57.193 20:20:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.193 20:20:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:57.450 20:20:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:57.450 20:20:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:57.450 20:20:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.450 20:20:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:57.708 20:20:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:57.708 20:20:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:57.708 20:20:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:57.966 20:20:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:58.223 20:20:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:21:59.156 20:20:46 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:59.156 20:20:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:59.156 20:20:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.156 20:20:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:59.413 20:20:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:59.413 20:20:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:59.413 20:20:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.413 20:20:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:59.670 20:20:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.670 20:20:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:59.670 20:20:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.670 20:20:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:59.928 20:20:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.928 20:20:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:59.928 20:20:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.928 20:20:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:00.185 20:20:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:00.185 20:20:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:00.185 20:20:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:00.185 20:20:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:00.443 20:20:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:00.443 20:20:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:00.443 20:20:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:00.443 20:20:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:00.700 20:20:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:00.700 20:20:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:00.700 20:20:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:00.958 20:20:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:01.216 20:20:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:02.151 20:20:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:02.151 20:20:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:02.151 20:20:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:02.151 20:20:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:02.409 20:20:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:02.409 20:20:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:02.409 20:20:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:02.409 20:20:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:02.667 20:20:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:02.667 20:20:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:02.667 20:20:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:02.667 20:20:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:02.925 20:20:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:02.925 20:20:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:02.925 20:20:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:02.925 20:20:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:03.183 20:20:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:03.183 20:20:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:03.183 20:20:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:03.183 20:20:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:03.440 20:20:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:03.440 20:20:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:03.440 20:20:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:03.440 20:20:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:03.698 20:20:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:03.698 20:20:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:03.698 20:20:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:03.956 20:20:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:04.214 20:20:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:05.148 20:20:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:05.148 20:20:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:05.148 20:20:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.148 20:20:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:05.407 20:20:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.407 20:20:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:05.407 20:20:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.407 20:20:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:05.665 20:20:52 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:05.665 20:20:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:05.665 20:20:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.665 20:20:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:05.923 20:20:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.923 20:20:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:05.923 20:20:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.923 20:20:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:06.181 20:20:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:06.181 20:20:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:06.181 20:20:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.181 20:20:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:06.439 20:20:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:06.439 20:20:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:06.439 20:20:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.439 20:20:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:06.696 20:20:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:06.696 20:20:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:06.696 20:20:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:06.953 20:20:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:07.209 20:20:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:08.141 20:20:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:08.141 20:20:55 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:08.141 20:20:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.141 20:20:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:08.399 20:20:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:08.399 20:20:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:08.399 20:20:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.399 20:20:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:08.657 20:20:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:08.657 20:20:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:08.657 20:20:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.657 20:20:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:08.914 20:20:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.914 20:20:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:08.914 20:20:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.914 20:20:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:09.172 20:20:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.172 20:20:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:09.172 20:20:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.172 20:20:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:09.430 20:20:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:09.430 20:20:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:09.430 20:20:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.430 20:20:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:09.687 20:20:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:09.687 20:20:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:09.687 20:20:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:09.687 20:20:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:09.944 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:11.334 20:20:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:11.334 20:20:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:11.334 20:20:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.334 20:20:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:11.334 20:20:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:11.334 20:20:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:11.334 20:20:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.334 20:20:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:11.592 20:20:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.592 20:20:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:11.592 20:20:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.592 20:20:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:11.849 20:20:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.849 20:20:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:11.849 20:20:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.849 20:20:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:12.108 20:20:59 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.108 20:20:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:12.108 20:20:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.108 20:20:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:12.367 20:20:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:12.367 20:20:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:12.367 20:20:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.367 20:20:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:12.625 20:20:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.625 20:20:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:12.883 20:20:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:12.883 20:20:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:13.141 20:21:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:13.399 20:21:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:14.333 20:21:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:14.333 20:21:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:14.333 20:21:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.333 20:21:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:14.591 20:21:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.591 20:21:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:14.591 20:21:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.591 20:21:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:22:14.849 20:21:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.849 20:21:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:14.849 20:21:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.850 20:21:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:15.107 20:21:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.107 20:21:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:15.107 20:21:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.107 20:21:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:15.365 20:21:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.365 20:21:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:15.365 20:21:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.365 20:21:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:15.622 20:21:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.623 20:21:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:15.623 20:21:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.623 20:21:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:15.881 20:21:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.881 20:21:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:15.881 20:21:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:16.139 20:21:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:16.398 20:21:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:17.331 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:22:17.331 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:17.331 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.331 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:17.588 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:17.588 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:17.588 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.588 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:17.845 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.845 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:17.845 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.845 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:18.103 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.103 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:18.103 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.103 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:18.360 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.360 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:18.360 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.360 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:18.617 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.617 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:18.617 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.617 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:18.875 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.875 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:18.875 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:19.132 20:21:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:19.390 20:21:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:20.763 20:21:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:20.763 20:21:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:20.763 20:21:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.763 20:21:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:20.763 20:21:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.763 20:21:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:20.763 20:21:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.763 20:21:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:21.021 20:21:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.021 20:21:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:21.021 20:21:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.021 20:21:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:21.278 20:21:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.278 20:21:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:21.278 20:21:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.278 20:21:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:21.538 20:21:08 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.538 20:21:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:21.538 20:21:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.538 20:21:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:21.804 20:21:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.804 20:21:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:21.804 20:21:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.804 20:21:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:22.062 20:21:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.062 20:21:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:22.062 20:21:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:22.320 20:21:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:22.577 20:21:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:23.509 20:21:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:23.509 20:21:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:23.509 20:21:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.509 20:21:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:23.767 20:21:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.767 20:21:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:23.767 20:21:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.767 20:21:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:24.024 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:24.024 20:21:11 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:24.024 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.024 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:24.282 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.282 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:24.282 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.282 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:24.539 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.539 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:24.539 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.539 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:24.797 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.797 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:24.797 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.797 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:25.054 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:25.054 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 286713 00:22:25.054 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 286713 ']' 00:22:25.054 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 286713 00:22:25.054 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:22:25.054 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:25.054 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 286713 00:22:25.054 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:25.054 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:25.054 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 
286713' 00:22:25.054 killing process with pid 286713 00:22:25.054 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 286713 00:22:25.054 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 286713 00:22:25.334 Connection closed with partial response: 00:22:25.334 00:22:25.334 00:22:25.334 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 286713 00:22:25.334 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:25.334 [2024-05-16 20:20:38.244242] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:22:25.334 [2024-05-16 20:20:38.244328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid286713 ] 00:22:25.334 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.334 [2024-05-16 20:20:38.304101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.334 [2024-05-16 20:20:38.411044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.334 Running I/O for 90 seconds... 00:22:25.334 [2024-05-16 20:20:53.881459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.334 [2024-05-16 20:20:53.881530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:25.334 [2024-05-16 20:20:53.881569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.334 [2024-05-16 20:20:53.881588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:25.334 [2024-05-16 20:20:53.881612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.334 [2024-05-16 20:20:53.881629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:25.334 [2024-05-16 20:20:53.881652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.334 [2024-05-16 20:20:53.881670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:25.334 [2024-05-16 20:20:53.881693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.334 [2024-05-16 20:20:53.881709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.881731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.881748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.881771] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.881788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.881811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.881827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.881849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.881875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.881900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.881916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.881939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.881962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.881986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 
dnr:0 00:22:25.335 [2024-05-16 20:20:53.882178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882940] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.882963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.882978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.883000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.883016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.883039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.883055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.883077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.883093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.883115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.883131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.883153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.883169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.883191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.883207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.883229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.883246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:25.335 [2024-05-16 20:20:53.883270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.335 [2024-05-16 20:20:53.883291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.883317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
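The nvme_qpair.c NOTICE pairs in this block (and continuing below) are bdevperf echoing each WRITE command and its completion from the try.txt log cat'ed above. The completion status "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" is NVMe path-related status (status code type 0x3, status code 0x2), returned by the target for I/O submitted to a listener whose ANA state had been switched to inaccessible; it is what drives the initiator-side failover this test verifies. As a purely illustrative check, not part of the test script, such completions could be counted directly from the captured log:

    # Illustrative only: count I/Os that completed with the path-related
    # "ANA inaccessible" status in the bdevperf log dumped above.
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt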
00:22:25.336 [2024-05-16 20:20:53.883336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.883370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.883388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.883413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.883433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.884355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.884380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.884408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.884425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.884448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.884464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.884487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.884502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.884525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.884541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.884563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.884583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.884610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.884628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.884657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 
lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.884674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.884696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.884716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.884740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.884759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.884784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.884800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.884822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.884842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.884874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.884900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.884922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.884938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.884961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.884977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
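Each check_status call traced earlier in this run expands into six port_status checks: current, connected, and accessible for listener 4420, then the same three for 4421 (script tags @68-@73). Each port_status in turn queries bdevperf over its RPC socket and filters the reply with jq (tag @64). Below is a minimal sketch of that pattern, reconstructed from the xtrace; the helper names and the RPC/jq invocation follow the trace, the bperf_rpc variable is introduced here for illustration, and the real multipath_status.sh may differ in details:

    bperf_rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    # port_status <trsvcid> <attribute> <expected>: compare one io_path field
    # (current/connected/accessible) for the given listener port against the
    # expected boolean, using bdevperf's view of the paths.
    port_status() {
        local port=$1 attr=$2 expected=$3
        local status
        status=$($bperf_rpc bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$status" == "$expected" ]]
    }

    # check_status <curr_4420> <curr_4421> <conn_4420> <conn_4421> <acc_4420> <acc_4421>
    check_status() {
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
        port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

For example, the check_status true false true true true false call at multipath_status.sh@106 earlier in the log matches the state set just before it: 4420 non_optimized (current and accessible) and 4421 inaccessible (still connected, but neither current nor accessible).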
00:22:25.336 [2024-05-16 20:20:53.885489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:25.336 [2024-05-16 20:20:53.885846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.336 [2024-05-16 20:20:53.885876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.885902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.885918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.886362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.886385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.886412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.886429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.886452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.886468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.886491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.886507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.886530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.886546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.886568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.886584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.886606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.886622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.886644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.886660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.886682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.886698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.886720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.886736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.886758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.886779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.886802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.886819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.886841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.886866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.886891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.886907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.886930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.886946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.886968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.886984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:25.337 [2024-05-16 20:20:53.887098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:25.337 [2024-05-16 20:20:53.887860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.337 [2024-05-16 20:20:53.887877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.887900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.887916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.887938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.887954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.887976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.887992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.338 [2024-05-16 20:20:53.888029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
00:22:25.338 [2024-05-16 20:20:53.888246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.888982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.888997] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.889019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.889035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.889057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.889072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.889094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.889109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.889132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.889151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.889970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.889993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.890020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.890038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.890060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.890076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:25.338 [2024-05-16 20:20:53.890098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.338 [2024-05-16 20:20:53.890113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:25.339 [2024-05-16 20:20:53.890189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.890968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.890990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.891006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.891028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.891043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.891066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.891082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.891103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.891119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.891140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.891156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.891178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.891194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.891216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.891231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.891253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.891269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.891291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.891309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 
00:22:25.339 [2024-05-16 20:20:53.891331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.891346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:25.339 [2024-05-16 20:20:53.891369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.339 [2024-05-16 20:20:53.891389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.891411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.891427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.891449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.891465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.891487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.891503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.891525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.891540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.891562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.891578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.891600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.891615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.891636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.891652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.891674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.891690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.891712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.891727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.891749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.891765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.891794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.891811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.891833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.891859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.891885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.891901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.891923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.891939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.891961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.891977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.891999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.892014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.892036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.892052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.892075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.892091] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.892731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.892755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.892782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.892799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.892822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.892838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.892868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.892886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.892919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.892935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.892958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.892973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.893000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.893017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.893039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.893055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.893083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.893100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.893122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:25.340 [2024-05-16 20:20:53.893138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.893160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.893175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.893198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.893213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.893235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.893251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.893273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.893288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.893310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.893327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.893351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.893368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.893390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.893406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.893428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.893444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.893471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.893489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.893510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 
lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.893526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.893549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.893564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.893589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.340 [2024-05-16 20:20:53.893605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:25.340 [2024-05-16 20:20:53.893626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.893643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.893665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.893681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.893709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.893726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.893749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.893765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.893788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.893803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.893825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.893842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.893875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.893893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.893916] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.893932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.893954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.893975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.893999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
00:22:25.341 [2024-05-16 20:20:53.894308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.341 [2024-05-16 20:20:53.894480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.894962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.894984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.895000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.895023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.895040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.895062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.895078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.895101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.895117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.895140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.895155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.895178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.895194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:25.341 [2024-05-16 20:20:53.895217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.341 [2024-05-16 20:20:53.895234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.895255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.895271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.895294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.895310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.895332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.895348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.895370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.895385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.895412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.895430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.895452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:25.342 [2024-05-16 20:20:53.895473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.895496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.895511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.895533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.895549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.895571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.895587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.896381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.896405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.896431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.896448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.896471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.896487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.896509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.896524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.896546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.896562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.896584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.896599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.896621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 
lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.896637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.896659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.896680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.896703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.896719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.896742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.896758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.896780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.896796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.896818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.896834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.896864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.896883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.896906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.896922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.896944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.896960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.896982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.896998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.897020] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.897036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.897058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.897074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.897096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.897111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.897134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.897153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.897176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.897192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.897215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.897230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.897252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.897268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.897290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.897306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.897328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.897344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.897366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.897382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:25.342 
[2024-05-16 20:20:53.897404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.897419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.897442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.897457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.897479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.897495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.897517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.897533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.897555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.342 [2024-05-16 20:20:53.897571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:25.342 [2024-05-16 20:20:53.897593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.897609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.897635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.897652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.897674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.897690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.897713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.897728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.897750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.897766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.897788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.897806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.897828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.897844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.897873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.897891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.897913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.897929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.897951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.897968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.897996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.898013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.898035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.898052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.898076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.898094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.898123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.898152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.898174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.898190] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.898212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.898228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.898250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.898266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.898287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.898308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.898330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.898346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.898369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.898385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.898408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.898424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.898446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.898468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.898492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.898510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.898534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.898552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.899195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.899218] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.899246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.899267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.899292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.899309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.899332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.899347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.899370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.899386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.899408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.899424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.899447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.899462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.899484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.899501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.899523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.899539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.899561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.343 [2024-05-16 20:20:53.899577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.343 [2024-05-16 20:20:53.899599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:25.343 [2024-05-16 20:20:53.899615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.899637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.899655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.899693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.899708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.899730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.899755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.899780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.899797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.899819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.899849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.899885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.899902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.899925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.899942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.899965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.899982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:103 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:25.344 
[2024-05-16 20:20:53.900821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.900967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.900988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.344 [2024-05-16 20:20:53.901004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.901026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.901042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.901063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.901079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.901101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.901117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.901139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.901154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.901192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.901207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.901229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.344 [2024-05-16 20:20:53.901248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:25.344 [2024-05-16 20:20:53.901270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.901285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.901307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.901323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.901345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.901360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.901381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.901396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.901418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.901433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.901454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.901469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.901490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.901506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.901527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.901542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.901563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.901578] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.901600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.901616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.901637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.901652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.901673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.901692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.901714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.901731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.901752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.901768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.901789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.901805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.901846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.901873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.901899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.901915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.901938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.901954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.901976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.901992] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.902014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.902030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.902052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.902068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.902888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.902911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.902938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.902956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.902979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.902995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.903022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.903039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.903061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.903077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.903099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.903115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.903138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.903155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.903177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:25.345 [2024-05-16 20:20:53.903192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.903214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.903230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.903252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.903268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.903290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.903306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.903328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.903344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.903366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.903381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.903403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.903419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.903441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.903457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.903484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.903500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.903522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.903538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.903560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:28 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.903575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.903597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.345 [2024-05-16 20:20:53.903613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:25.345 [2024-05-16 20:20:53.903635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.903651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.903690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.903706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.903727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.903743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.903765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.903780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.903801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.903816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.903870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.903889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.903913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.903929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.903951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.903967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.903989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 
00:22:25.346 [2024-05-16 20:20:53.904400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.904978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.904994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.905020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.905037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.905059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.905076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.905099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.905116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.905782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.905806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.905841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.905867] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.905892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.905908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.905930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.905946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:25.346 [2024-05-16 20:20:53.905969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.346 [2024-05-16 20:20:53.905984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 
[2024-05-16 20:20:53.906266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73624 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.906968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.906984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.907006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.907021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.907043] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.907059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.907081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.907098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.907120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.907136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.907183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.907199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.907222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.907238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.907259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.907275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.907296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.907327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.907350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.907366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.907388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.347 [2024-05-16 20:20:53.907404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:25.347 [2024-05-16 20:20:53.907426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.907442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:25.348 
[2024-05-16 20:20:53.907464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.907480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.907502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.907518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.907540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.907556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.907579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.907595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.907617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.348 [2024-05-16 20:20:53.907648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.907677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.907693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.907715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.907730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.907752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.907767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.907788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.907803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.907824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.907862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:40 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.907887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.907903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.907925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.907941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.907963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.907984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.908007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.908022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.908044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.908060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.908082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.908098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.908120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.908135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.908180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.908200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.908228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.908243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.908265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.908280] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.908302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.908317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.908339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.908354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.908375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.908391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.908412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.908428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.908449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.908465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.908486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.908501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.908523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.908538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.908559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.908574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.908596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.908612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.908634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 
20:20:53.908653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.908676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.908691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.909503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.909526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.909552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.909570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.909593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.909609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.909631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.909647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.909669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.909685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.909707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.909722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.909745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.909760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.909782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.348 [2024-05-16 20:20:53.909798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:25.348 [2024-05-16 20:20:53.909820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73072 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.909835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.909866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.909884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.909907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.909923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.909950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.909966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.909989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.910973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.910994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.917360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 
00:22:25.349 [2024-05-16 20:20:53.917403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.917423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.917446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.917462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.917484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.917499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.917521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.917537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.917558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.917574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.917596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.917612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.917633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.917648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.917669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.917685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.917706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.917721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.917742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.349 [2024-05-16 20:20:53.917763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:25.349 [2024-05-16 20:20:53.917786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.917802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.917823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.917863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.917890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.917906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.917928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.917944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.917966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.917982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.918004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.918019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.918042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.918058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.918748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.918772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.918799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.918817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.918839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.918864] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.918890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.918907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.918929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.918946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.918974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.918990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:25.350 [2024-05-16 20:20:53.919256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 
lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.919982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.919998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.920020] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.920035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:25.350 [2024-05-16 20:20:53.920057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.350 [2024-05-16 20:20:53.920073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
00:22:25.351 [2024-05-16 20:20:53.920432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.351 [2024-05-16 20:20:53.920617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:121 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.920971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.920987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.921009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.921024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.921046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.921062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.921084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.921100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.921122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.921137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.921185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.921201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.921222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.921237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.921259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.921274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.921295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.921310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.921331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.921346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.921367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.921382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.921407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.921423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.921445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.921460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.921482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.921497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.921518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.921534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.921554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.921570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.921592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:25.351 [2024-05-16 20:20:53.921607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.922405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.922429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.922455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.351 [2024-05-16 20:20:53.922474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:25.351 [2024-05-16 20:20:53.922496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.922512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.922534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.922550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.922572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.922587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.922609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.922624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.922647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.922667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.922690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.922706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.922729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.922745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.922767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 
lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.922784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.922806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.922822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.922844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.922868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.922893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.922909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.922931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.922947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.922969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.922985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
00:22:25.352 [2024-05-16 20:20:53.923554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.352 [2024-05-16 20:20:53.923860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:25.352 [2024-05-16 20:20:53.923884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.923900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.923922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.923939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.923960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.923976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.923998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.924014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.924036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.924052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.924074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.924089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.924115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.924132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.924155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.924171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.924192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.924208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.924231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.924247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.924270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.924286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.924308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.924323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.924345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.924361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.924384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.924400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.924422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.924438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.924460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.924476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.924498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.924514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.924537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.924553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.925194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.925221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.925249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.925266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.925289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.925305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.925327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:25.353 [2024-05-16 20:20:53.925343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.925366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.925381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.925403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.925419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.925441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.925457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.925479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.925495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.925517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.925532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.925554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.925569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.925591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.925606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.925628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.925644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.925666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.925701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.925724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73544 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.925740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.925761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.925777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.925798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.925813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.925849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.925875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.925899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.925915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.925937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.925953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.925975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.925991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.926014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.926029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.926052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.353 [2024-05-16 20:20:53.926067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:25.353 [2024-05-16 20:20:53.926089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 
00:22:25.354 [2024-05-16 20:20:53.926511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.926969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.926991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.927008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.927029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.927045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.927067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.354 [2024-05-16 20:20:53.927083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.927106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.927122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.927159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.927178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.927200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.927216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.927238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.927253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.927274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.927289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.927311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.927326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.927347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.927362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.927383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.927398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.927420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.927435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.927456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.927471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.927492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.927508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.927529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.927544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.927565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.927580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.927602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.354 [2024-05-16 20:20:53.927622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:25.354 [2024-05-16 20:20:53.927645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:25.354 [2024-05-16 20:20:53.927660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.927682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.927697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.927718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.927734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.927755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.927771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.927792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.927808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.927843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.927866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.927891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.927907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.927930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.927945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.927967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.927983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.928006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.928022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.928827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.928859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.928889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.928906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.928934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.928951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.928973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.928989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:25.355 
[2024-05-16 20:20:53.929632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.929974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.929997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.930012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.930035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.355 [2024-05-16 20:20:53.930050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:25.355 [2024-05-16 20:20:53.930072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930416] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 
20:20:53.930822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.930974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.930990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.931012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.931029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.931695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.931718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.931745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.931762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.931784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.931800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.931823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.931848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.931880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73464 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.931897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.931919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.931934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.931957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.931973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.931995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.932011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.932037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.932054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.932076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.932093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.932115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.932131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:25.356 [2024-05-16 20:20:53.932153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.356 [2024-05-16 20:20:53.932168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:83 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932648] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.932968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.932990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.933013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.933029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 
dnr:0 00:22:25.357 [2024-05-16 20:20:53.933051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.933067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.933089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.933105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.933127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.933143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.933164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.933180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.933202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.933218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.933239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.933255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.933277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.933292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.933314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.933330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.933352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.933367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.933389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.933404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.933426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.933442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.933468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.933485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.933507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.933538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.933560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.933575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.933597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.357 [2024-05-16 20:20:53.933612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.933649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.933666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.933709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.933727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.933749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.357 [2024-05-16 20:20:53.933765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:25.357 [2024-05-16 20:20:53.933786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.358 [2024-05-16 20:20:53.933801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:25.358 [2024-05-16 20:20:53.933823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.358 [2024-05-16 20:20:53.933863] nvme_qpair.c: 
[... repeated SPDK nvme_qpair.c *NOTICE* output elided: nvme_io_qpair_print_command WRITE commands (plus one READ) on qid:1 nsid:1, lba 72792–73808, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed by spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, logged 2024-05-16 20:20:53.933–53.951 (console timestamps 00:22:25.358–00:22:25.363) ...]
OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.951139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.951165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.951181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.951222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.951238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.951279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.951295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.951319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.951334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.951359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.951378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.951404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.951419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.951444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.951459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.951484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.951499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.951523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.951538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.951563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:109 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.951578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.951603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.951618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.951642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.951657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.951682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.951697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.951722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.951737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.951761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.951776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.951800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.951815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.951840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.951887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.951918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.951934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.951960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.951976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.952001] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.952016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.952041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.363 [2024-05-16 20:20:53.952056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:25.363 [2024-05-16 20:20:53.952082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.952123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.952174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.952230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.952270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.952310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.952349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.952389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 
dnr:0 00:22:25.364 [2024-05-16 20:20:53.952433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.952473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.952513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.952553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.952593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.952634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.952673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.952713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.952752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.952792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.952832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.952897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.952944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.952960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.952986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.953002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.953027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.364 [2024-05-16 20:20:53.953043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.953068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.953084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.953109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.953125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.953150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.953180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.953206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.953221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.953245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.953261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.953286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.953301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.953325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.953340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.953364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.953379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.953404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.953419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.953443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.953462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.953488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.953504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.953529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.953543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.953568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.953583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.953608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.953623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.953647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:25.364 [2024-05-16 20:20:53.953662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.953686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.953702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.953726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.364 [2024-05-16 20:20:53.953741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:25.364 [2024-05-16 20:20:53.953765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:20:53.953780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:20:53.953805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:20:53.953820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:20:53.953867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:20:53.953885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:20:53.953927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:20:53.953945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:20:53.954128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:20:53.954167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.553737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.553802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.553885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.553907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.553932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 
lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.553950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.553973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.553989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 
00:22:25.365 [2024-05-16 20:21:09.554732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.554878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.365 [2024-05-16 20:21:09.554924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.365 [2024-05-16 20:21:09.554963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.554985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.555002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.555024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.365 [2024-05-16 20:21:09.555040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:25.365 [2024-05-16 20:21:09.555062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.555078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.555101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.555117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.555139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.555155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.555185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.555201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.555224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.555240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.555262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.555279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.555302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.555322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.555346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.555361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.555384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.555400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.555422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.555439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.555461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.555476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.555498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.555515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.555539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.555556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.556147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.556173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.556200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.556218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.556242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.556258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.556280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.556297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.556319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.556336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.556359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.556380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.556404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.556419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.556442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.556459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.556481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:25.366 [2024-05-16 20:21:09.556497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.556519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.556536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.556558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.556574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.556596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.556612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.556635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.556651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.556674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.556690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.556712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.556728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.556752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.556769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.557825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.366 [2024-05-16 20:21:09.557849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.557886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.557904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.557933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.557950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.557972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.557989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.558011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.366 [2024-05-16 20:21:09.558027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.558049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.366 [2024-05-16 20:21:09.558066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.558088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.366 [2024-05-16 20:21:09.558104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.558127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.366 [2024-05-16 20:21:09.558143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.558170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.366 [2024-05-16 20:21:09.558186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.558208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.366 [2024-05-16 20:21:09.558224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.558247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.366 [2024-05-16 20:21:09.558262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:25.366 [2024-05-16 20:21:09.558284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.367 [2024-05-16 20:21:09.558301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:25.367 [2024-05-16 20:21:09.558323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.367 [2024-05-16 20:21:09.558339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:25.367 [2024-05-16 20:21:09.558361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.367 [2024-05-16 20:21:09.558377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:25.367 [2024-05-16 20:21:09.558407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.367 [2024-05-16 20:21:09.558425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:25.367 [2024-05-16 20:21:09.558447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.367 [2024-05-16 20:21:09.558462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:25.367 [2024-05-16 20:21:09.558484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.367 [2024-05-16 20:21:09.558500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:25.367 [2024-05-16 20:21:09.558523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.367 [2024-05-16 20:21:09.558539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:25.367 [2024-05-16 20:21:09.558562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.367 [2024-05-16 20:21:09.558578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:25.367 Received shutdown signal, test time was about 32.288678 seconds 00:22:25.367 00:22:25.367 Latency(us) 00:22:25.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.367 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:25.367 Verification LBA range: start 0x0 length 0x4000 00:22:25.367 Nvme0n1 : 32.29 8108.09 31.67 0.00 0.00 15759.22 1013.38 4101097.24 00:22:25.367 =================================================================================================================== 00:22:25.367 Total : 8108.09 31.67 0.00 0.00 15759.22 1013.38 4101097.24 00:22:25.367 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:25.624 rmmod nvme_tcp 00:22:25.624 rmmod nvme_fabrics 00:22:25.624 rmmod nvme_keyring 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 286433 ']' 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 286433 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 286433 ']' 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 286433 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 286433 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 286433' 00:22:25.624 killing process with pid 286433 00:22:25.624 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 286433 00:22:25.625 [2024-05-16 20:21:12.699796] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:25.625 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 286433 00:22:25.882 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:25.882 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:25.882 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:25.883 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:25.883 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:25.883 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.883 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:25.883 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.410 20:21:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:28.410 00:22:28.410 real 0m40.884s 00:22:28.410 user 2m3.336s 00:22:28.410 sys 0m10.272s 00:22:28.410 20:21:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:28.410 20:21:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:28.410 ************************************ 00:22:28.410 END TEST nvmf_host_multipath_status 00:22:28.410 ************************************ 00:22:28.410 20:21:15 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:28.410 20:21:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:28.410 20:21:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:28.410 20:21:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:28.410 ************************************ 00:22:28.410 START TEST nvmf_discovery_remove_ifc 00:22:28.410 ************************************ 00:22:28.410 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:28.410 * Looking for test storage... 00:22:28.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:28.410 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.410 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:28.410 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.410 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.410 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.410 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.410 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.410 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.410 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.410 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.410 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.410 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.410 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:28.411 20:21:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:30.315 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:30.315 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:30.315 Found net devices under 0000:09:00.0: cvl_0_0 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:30.315 Found net devices under 0000:09:00.1: cvl_0_1 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
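For orientation, the device-discovery trace above reduces to a small pattern: enumerate the supported PCI NICs, then map each one to its kernel network interface via sysfs. A condensed, hedged sketch (not the nvmf/common.sh source itself; the PCI addresses and resulting interface names are the ones this run reports):

  # e810 ports (0x8086:0x159b) found by the trace above
  pci_devs=(0000:09:00.0 0000:09:00.1)
  net_devs=()
  for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # kernel exposes the bound netdev(s) here
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the interface name
    net_devs+=("${pci_net_devs[@]}")                   # -> cvl_0_0 and cvl_0_1 in this run
  done

The first interface then becomes the target side (moved into the cvl_0_0_ns_spdk namespace in the trace that follows) and the second stays in the root namespace as the initiator.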
00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.315 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:30.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:30.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:22:30.316 00:22:30.316 --- 10.0.0.2 ping statistics --- 00:22:30.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.316 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:30.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:22:30.316 00:22:30.316 --- 10.0.0.1 ping statistics --- 00:22:30.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.316 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=293480 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 293480 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 293480 ']' 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:30.316 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:30.316 [2024-05-16 20:21:17.289437] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:22:30.316 [2024-05-16 20:21:17.289525] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.316 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.316 [2024-05-16 20:21:17.361339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.575 [2024-05-16 20:21:17.469741] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.575 [2024-05-16 20:21:17.469789] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.576 [2024-05-16 20:21:17.469817] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.576 [2024-05-16 20:21:17.469829] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.576 [2024-05-16 20:21:17.469848] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.576 [2024-05-16 20:21:17.469898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.576 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:30.576 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:22:30.576 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:30.576 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:30.576 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:30.576 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.576 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:30.576 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.576 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:30.576 [2024-05-16 20:21:17.615538] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.576 [2024-05-16 20:21:17.623504] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:30.576 [2024-05-16 20:21:17.623763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:30.576 null0 00:22:30.576 [2024-05-16 20:21:17.655679] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.576 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.576 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=293573 00:22:30.576 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:30.576 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 293573 /tmp/host.sock 00:22:30.576 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 293573 ']' 00:22:30.576 20:21:17 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:22:30.576 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:30.576 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:30.576 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:30.576 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:30.576 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:30.576 [2024-05-16 20:21:17.718496] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:22:30.576 [2024-05-16 20:21:17.718573] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293573 ] 00:22:30.834 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.834 [2024-05-16 20:21:17.780454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.834 [2024-05-16 20:21:17.897501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.834 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:30.834 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:22:30.834 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:30.834 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:30.834 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.834 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:30.834 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.834 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:30.834 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.834 20:21:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:31.091 20:21:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.091 20:21:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:31.091 20:21:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.091 20:21:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:32.023 [2024-05-16 20:21:19.103007] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:32.023 [2024-05-16 20:21:19.103050] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:32.023 [2024-05-16 
20:21:19.103075] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:32.280 [2024-05-16 20:21:19.189348] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:32.280 [2024-05-16 20:21:19.366986] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:32.280 [2024-05-16 20:21:19.367056] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:32.280 [2024-05-16 20:21:19.367093] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:32.280 [2024-05-16 20:21:19.367116] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:32.280 [2024-05-16 20:21:19.367170] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:32.280 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.280 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:32.280 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:32.280 [2024-05-16 20:21:19.371776] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x7d7de0 was disconnected and freed. delete nvme_qpair. 00:22:32.280 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:32.280 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:32.280 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.280 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:32.280 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:32.280 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:32.280 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.280 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:32.280 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:32.280 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:32.538 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:32.538 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:32.538 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:32.538 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.538 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:32.538 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:32.538 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:32.538 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:22:32.538 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.538 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:32.538 20:21:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:33.469 20:21:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:33.469 20:21:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:33.469 20:21:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:33.469 20:21:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.469 20:21:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:33.469 20:21:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:33.469 20:21:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:33.469 20:21:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.469 20:21:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:33.469 20:21:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:34.840 20:21:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:34.840 20:21:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:34.840 20:21:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:34.840 20:21:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.840 20:21:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:34.840 20:21:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:34.840 20:21:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:34.840 20:21:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.840 20:21:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:34.840 20:21:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:35.773 20:21:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:35.773 20:21:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.773 20:21:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:35.773 20:21:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.773 20:21:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:35.773 20:21:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:35.773 20:21:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:35.773 20:21:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.773 20:21:22 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:35.773 20:21:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:36.706 20:21:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:36.706 20:21:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.706 20:21:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:36.706 20:21:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.706 20:21:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:36.706 20:21:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:36.706 20:21:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:36.706 20:21:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.706 20:21:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:36.706 20:21:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:37.638 20:21:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:37.638 20:21:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.638 20:21:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.638 20:21:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:37.638 20:21:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:37.638 20:21:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:37.638 20:21:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:37.638 20:21:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.638 20:21:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:37.638 20:21:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:37.896 [2024-05-16 20:21:24.808334] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:37.896 [2024-05-16 20:21:24.808398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.896 [2024-05-16 20:21:24.808423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.896 [2024-05-16 20:21:24.808443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.896 [2024-05-16 20:21:24.808459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.896 [2024-05-16 20:21:24.808474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.896 [2024-05-16 20:21:24.808489] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.896 [2024-05-16 20:21:24.808504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.896 [2024-05-16 20:21:24.808519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.896 [2024-05-16 20:21:24.808535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.896 [2024-05-16 20:21:24.808557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.896 [2024-05-16 20:21:24.808571] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79f1a0 is same with the state(5) to be set 00:22:37.896 [2024-05-16 20:21:24.818354] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79f1a0 (9): Bad file descriptor 00:22:37.896 [2024-05-16 20:21:24.828403] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:38.828 20:21:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:38.828 20:21:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.828 20:21:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:38.828 20:21:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.828 20:21:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:38.828 20:21:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:38.828 20:21:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:38.828 [2024-05-16 20:21:25.877880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:38.828 [2024-05-16 20:21:25.877929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79f1a0 with addr=10.0.0.2, port=4420 00:22:38.828 [2024-05-16 20:21:25.877952] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79f1a0 is same with the state(5) to be set 00:22:38.828 [2024-05-16 20:21:25.877986] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79f1a0 (9): Bad file descriptor 00:22:38.828 [2024-05-16 20:21:25.878370] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:38.828 [2024-05-16 20:21:25.878405] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:38.828 [2024-05-16 20:21:25.878423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:38.828 [2024-05-16 20:21:25.878442] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:38.828 [2024-05-16 20:21:25.878467] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:38.828 [2024-05-16 20:21:25.878486] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:38.828 20:21:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.828 20:21:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:38.828 20:21:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:39.764 [2024-05-16 20:21:26.881013] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:39.764 [2024-05-16 20:21:26.881114] bdev_nvme.c:6729:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:39.764 [2024-05-16 20:21:26.881189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.764 [2024-05-16 20:21:26.881215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.764 [2024-05-16 20:21:26.881238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.764 [2024-05-16 20:21:26.881253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.764 [2024-05-16 20:21:26.881269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.764 [2024-05-16 20:21:26.881292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.764 [2024-05-16 20:21:26.881308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.764 [2024-05-16 20:21:26.881323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.764 [2024-05-16 20:21:26.881339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.764 [2024-05-16 20:21:26.881354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.764 [2024-05-16 20:21:26.881370] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
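The failure cascade above (connect() errno 110, repeated reset attempts, the discovery entry removed, the controller left in failed state) is the expected result of the step the test performed earlier: pulling the target-side interface out from under a live connection and then waiting for the host's bdev to disappear. A hedged outline of that step, using the commands visible earlier in the trace (rpc_cmd is the test suite's wrapper around scripts/rpc.py; the polling loop is a paraphrase of the script's wait_for_bdev helper, not its literal body):

  # remove the target address and take the link down inside the target namespace
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

  # poll the host until nvme0n1 is gone (wait_for_bdev '')
  while true; do
    bdevs=$(rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
    [[ "$bdevs" == '' ]] && break
    sleep 1
  done

The short --ctrlr-loss-timeout-sec 2 and --reconnect-delay-sec 1 values passed to bdev_nvme_start_discovery are why the host gives up after only a couple of reset attempts instead of retrying indefinitely.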
00:22:39.764 [2024-05-16 20:21:26.881524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79e630 (9): Bad file descriptor 00:22:39.764 [2024-05-16 20:21:26.882543] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:39.764 [2024-05-16 20:21:26.882568] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:39.764 20:21:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:39.764 20:21:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.764 20:21:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:39.764 20:21:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.764 20:21:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:39.764 20:21:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:39.764 20:21:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:40.021 20:21:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.021 20:21:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:40.021 20:21:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.022 20:21:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.022 20:21:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:40.022 20:21:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:40.022 20:21:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.022 20:21:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:40.022 20:21:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.022 20:21:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:40.022 20:21:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:40.022 20:21:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:40.022 20:21:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.022 20:21:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:40.022 20:21:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:40.953 20:21:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:40.953 20:21:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.953 20:21:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.953 20:21:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:40.953 20:21:28 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:22:40.953 20:21:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:40.953 20:21:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:40.953 20:21:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.953 20:21:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:40.953 20:21:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:41.937 [2024-05-16 20:21:28.939500] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:41.937 [2024-05-16 20:21:28.939535] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:41.937 [2024-05-16 20:21:28.939563] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:41.937 20:21:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:41.937 20:21:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.937 20:21:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.937 20:21:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:41.937 20:21:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:41.937 20:21:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:41.937 [2024-05-16 20:21:29.065973] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:41.937 20:21:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:42.259 20:21:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.259 20:21:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:42.259 20:21:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:42.259 [2024-05-16 20:21:29.250239] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:42.259 [2024-05-16 20:21:29.250297] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:42.260 [2024-05-16 20:21:29.250334] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:42.260 [2024-05-16 20:21:29.250362] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:42.260 [2024-05-16 20:21:29.250379] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:42.260 [2024-05-16 20:21:29.257757] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x7ad260 was disconnected and freed. delete nvme_qpair. 
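The automatic reattach above (new subsystem nvme1, with nvme1n1 about to appear in the bdev list) is driven by the discovery service started at the beginning of the test, not by any extra action here: once the interface is added back and brought up, the discovery poller reconnects to port 8009, reads the discovery log page again and attaches the subsystem under a fresh controller name. For reference, a standalone equivalent of that earlier RPC, with the same arguments the trace shows but issued directly through scripts/rpc.py instead of the rpc_cmd helper (path as used in this workspace):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
      bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
      --wait-for-attach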
00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 293573 00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 293573 ']' 00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 293573 00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 293573 00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 293573' 00:22:43.193 killing process with pid 293573 00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 293573 00:22:43.193 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 293573 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:43.450 rmmod nvme_tcp 00:22:43.450 rmmod nvme_fabrics 00:22:43.450 rmmod nvme_keyring 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:22:43.450 
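The killprocess xtrace above (the '-z', kill -0, uname, ps comm= and wait steps) is the common shutdown helper making sure it only signals the intended SPDK reactor process; the same pattern repeats just below for the nvmf target pid. A condensed, Linux-only sketch of that pattern with an illustrative name, not the verbatim autotest_common.sh function:

```bash
# Condensed, Linux-only sketch of the killprocess pattern visible in the
# xtrace above; illustrative, not the verbatim SPDK helper.
killprocess_sketch() {
    local pid=$1 process_name
    [[ -n "$pid" ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 1            # only proceed if it is still running
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 / reactor_1 in the log
    [[ "$process_name" == sudo ]] && return 1         # this sketch simply refuses to signal a bare sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap the child so later steps don't race its shutdown
}
```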
20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 293480 ']' 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 293480 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 293480 ']' 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 293480 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 293480 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 293480' 00:22:43.450 killing process with pid 293480 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 293480 00:22:43.450 [2024-05-16 20:21:30.528590] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:43.450 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 293480 00:22:43.709 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:43.709 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:43.709 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:43.709 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:43.709 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:43.709 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.709 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.709 20:21:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.239 20:21:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:46.239 00:22:46.239 real 0m17.770s 00:22:46.239 user 0m25.892s 00:22:46.239 sys 0m2.949s 00:22:46.239 20:21:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:46.239 20:21:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:46.239 ************************************ 00:22:46.239 END TEST nvmf_discovery_remove_ifc 00:22:46.239 ************************************ 00:22:46.239 20:21:32 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:46.239 20:21:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:46.239 20:21:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:46.239 20:21:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:46.239 
************************************ 00:22:46.239 START TEST nvmf_identify_kernel_target 00:22:46.239 ************************************ 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:46.239 * Looking for test storage... 00:22:46.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:46.239 20:21:32 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:46.239 20:21:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:48.137 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:48.137 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:48.137 Found net devices under 0000:09:00.0: cvl_0_0 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:48.137 Found net devices under 0000:09:00.1: cvl_0_1 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.137 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.138 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.138 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:48.138 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.138 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.138 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:48.138 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.138 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.138 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:48.138 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:48.138 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.138 20:21:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:48.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:22:48.138 00:22:48.138 --- 10.0.0.2 ping statistics --- 00:22:48.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.138 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:48.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:22:48.138 00:22:48.138 --- 10.0.0.1 ping statistics --- 00:22:48.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.138 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:48.138 20:21:35 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:48.138 20:21:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:49.072 Waiting for block devices as requested 00:22:49.072 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:49.072 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:49.330 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:49.330 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:49.330 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:49.330 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:49.588 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:49.588 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:49.588 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:22:49.588 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:49.847 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:49.847 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:49.847 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:49.847 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:50.105 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:50.105 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:50.105 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:50.364 No valid GPT data, bailing 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:50.364 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:22:50.364 00:22:50.364 Discovery Log Number of Records 2, Generation counter 2 00:22:50.364 =====Discovery Log Entry 0====== 00:22:50.364 trtype: tcp 00:22:50.364 adrfam: ipv4 00:22:50.364 subtype: current discovery subsystem 00:22:50.364 treq: not specified, sq flow control disable supported 00:22:50.364 portid: 1 00:22:50.364 trsvcid: 4420 00:22:50.364 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:50.364 traddr: 10.0.0.1 00:22:50.364 eflags: none 00:22:50.364 sectype: none 00:22:50.364 =====Discovery Log Entry 1====== 00:22:50.364 trtype: tcp 00:22:50.364 adrfam: ipv4 00:22:50.364 subtype: nvme subsystem 00:22:50.364 treq: not specified, sq flow control disable supported 00:22:50.364 portid: 1 00:22:50.364 trsvcid: 4420 00:22:50.364 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:50.364 traddr: 10.0.0.1 00:22:50.365 eflags: none 00:22:50.365 sectype: none 00:22:50.365 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:50.365 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:50.365 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.365 ===================================================== 00:22:50.365 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:50.365 ===================================================== 00:22:50.365 Controller Capabilities/Features 00:22:50.365 ================================ 00:22:50.365 Vendor ID: 0000 00:22:50.365 Subsystem Vendor ID: 0000 00:22:50.365 Serial Number: f46ebe9d7748f8de56e8 00:22:50.365 Model Number: Linux 00:22:50.365 Firmware Version: 6.7.0-68 00:22:50.365 Recommended Arb Burst: 0 00:22:50.365 IEEE OUI Identifier: 00 00 00 00:22:50.365 Multi-path I/O 00:22:50.365 May have multiple subsystem ports: No 00:22:50.365 May have multiple 
controllers: No 00:22:50.365 Associated with SR-IOV VF: No 00:22:50.365 Max Data Transfer Size: Unlimited 00:22:50.365 Max Number of Namespaces: 0 00:22:50.365 Max Number of I/O Queues: 1024 00:22:50.365 NVMe Specification Version (VS): 1.3 00:22:50.365 NVMe Specification Version (Identify): 1.3 00:22:50.365 Maximum Queue Entries: 1024 00:22:50.365 Contiguous Queues Required: No 00:22:50.365 Arbitration Mechanisms Supported 00:22:50.365 Weighted Round Robin: Not Supported 00:22:50.365 Vendor Specific: Not Supported 00:22:50.365 Reset Timeout: 7500 ms 00:22:50.365 Doorbell Stride: 4 bytes 00:22:50.365 NVM Subsystem Reset: Not Supported 00:22:50.365 Command Sets Supported 00:22:50.365 NVM Command Set: Supported 00:22:50.365 Boot Partition: Not Supported 00:22:50.365 Memory Page Size Minimum: 4096 bytes 00:22:50.365 Memory Page Size Maximum: 4096 bytes 00:22:50.365 Persistent Memory Region: Not Supported 00:22:50.365 Optional Asynchronous Events Supported 00:22:50.365 Namespace Attribute Notices: Not Supported 00:22:50.365 Firmware Activation Notices: Not Supported 00:22:50.365 ANA Change Notices: Not Supported 00:22:50.365 PLE Aggregate Log Change Notices: Not Supported 00:22:50.365 LBA Status Info Alert Notices: Not Supported 00:22:50.365 EGE Aggregate Log Change Notices: Not Supported 00:22:50.365 Normal NVM Subsystem Shutdown event: Not Supported 00:22:50.365 Zone Descriptor Change Notices: Not Supported 00:22:50.365 Discovery Log Change Notices: Supported 00:22:50.365 Controller Attributes 00:22:50.365 128-bit Host Identifier: Not Supported 00:22:50.365 Non-Operational Permissive Mode: Not Supported 00:22:50.365 NVM Sets: Not Supported 00:22:50.365 Read Recovery Levels: Not Supported 00:22:50.365 Endurance Groups: Not Supported 00:22:50.365 Predictable Latency Mode: Not Supported 00:22:50.365 Traffic Based Keep ALive: Not Supported 00:22:50.365 Namespace Granularity: Not Supported 00:22:50.365 SQ Associations: Not Supported 00:22:50.365 UUID List: Not Supported 00:22:50.365 Multi-Domain Subsystem: Not Supported 00:22:50.365 Fixed Capacity Management: Not Supported 00:22:50.365 Variable Capacity Management: Not Supported 00:22:50.365 Delete Endurance Group: Not Supported 00:22:50.365 Delete NVM Set: Not Supported 00:22:50.365 Extended LBA Formats Supported: Not Supported 00:22:50.365 Flexible Data Placement Supported: Not Supported 00:22:50.365 00:22:50.365 Controller Memory Buffer Support 00:22:50.365 ================================ 00:22:50.365 Supported: No 00:22:50.365 00:22:50.365 Persistent Memory Region Support 00:22:50.365 ================================ 00:22:50.365 Supported: No 00:22:50.365 00:22:50.365 Admin Command Set Attributes 00:22:50.365 ============================ 00:22:50.365 Security Send/Receive: Not Supported 00:22:50.365 Format NVM: Not Supported 00:22:50.365 Firmware Activate/Download: Not Supported 00:22:50.365 Namespace Management: Not Supported 00:22:50.365 Device Self-Test: Not Supported 00:22:50.365 Directives: Not Supported 00:22:50.365 NVMe-MI: Not Supported 00:22:50.365 Virtualization Management: Not Supported 00:22:50.365 Doorbell Buffer Config: Not Supported 00:22:50.365 Get LBA Status Capability: Not Supported 00:22:50.365 Command & Feature Lockdown Capability: Not Supported 00:22:50.365 Abort Command Limit: 1 00:22:50.365 Async Event Request Limit: 1 00:22:50.365 Number of Firmware Slots: N/A 00:22:50.365 Firmware Slot 1 Read-Only: N/A 00:22:50.365 Firmware Activation Without Reset: N/A 00:22:50.365 Multiple Update Detection Support: N/A 
00:22:50.365 Firmware Update Granularity: No Information Provided 00:22:50.365 Per-Namespace SMART Log: No 00:22:50.365 Asymmetric Namespace Access Log Page: Not Supported 00:22:50.365 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:50.365 Command Effects Log Page: Not Supported 00:22:50.365 Get Log Page Extended Data: Supported 00:22:50.365 Telemetry Log Pages: Not Supported 00:22:50.365 Persistent Event Log Pages: Not Supported 00:22:50.365 Supported Log Pages Log Page: May Support 00:22:50.365 Commands Supported & Effects Log Page: Not Supported 00:22:50.365 Feature Identifiers & Effects Log Page:May Support 00:22:50.365 NVMe-MI Commands & Effects Log Page: May Support 00:22:50.365 Data Area 4 for Telemetry Log: Not Supported 00:22:50.365 Error Log Page Entries Supported: 1 00:22:50.365 Keep Alive: Not Supported 00:22:50.365 00:22:50.365 NVM Command Set Attributes 00:22:50.365 ========================== 00:22:50.365 Submission Queue Entry Size 00:22:50.365 Max: 1 00:22:50.365 Min: 1 00:22:50.365 Completion Queue Entry Size 00:22:50.365 Max: 1 00:22:50.365 Min: 1 00:22:50.365 Number of Namespaces: 0 00:22:50.365 Compare Command: Not Supported 00:22:50.365 Write Uncorrectable Command: Not Supported 00:22:50.365 Dataset Management Command: Not Supported 00:22:50.365 Write Zeroes Command: Not Supported 00:22:50.365 Set Features Save Field: Not Supported 00:22:50.365 Reservations: Not Supported 00:22:50.365 Timestamp: Not Supported 00:22:50.365 Copy: Not Supported 00:22:50.365 Volatile Write Cache: Not Present 00:22:50.365 Atomic Write Unit (Normal): 1 00:22:50.365 Atomic Write Unit (PFail): 1 00:22:50.365 Atomic Compare & Write Unit: 1 00:22:50.365 Fused Compare & Write: Not Supported 00:22:50.365 Scatter-Gather List 00:22:50.365 SGL Command Set: Supported 00:22:50.365 SGL Keyed: Not Supported 00:22:50.365 SGL Bit Bucket Descriptor: Not Supported 00:22:50.365 SGL Metadata Pointer: Not Supported 00:22:50.365 Oversized SGL: Not Supported 00:22:50.365 SGL Metadata Address: Not Supported 00:22:50.365 SGL Offset: Supported 00:22:50.365 Transport SGL Data Block: Not Supported 00:22:50.365 Replay Protected Memory Block: Not Supported 00:22:50.365 00:22:50.365 Firmware Slot Information 00:22:50.365 ========================= 00:22:50.365 Active slot: 0 00:22:50.365 00:22:50.365 00:22:50.365 Error Log 00:22:50.365 ========= 00:22:50.365 00:22:50.365 Active Namespaces 00:22:50.365 ================= 00:22:50.365 Discovery Log Page 00:22:50.365 ================== 00:22:50.365 Generation Counter: 2 00:22:50.365 Number of Records: 2 00:22:50.365 Record Format: 0 00:22:50.365 00:22:50.365 Discovery Log Entry 0 00:22:50.365 ---------------------- 00:22:50.365 Transport Type: 3 (TCP) 00:22:50.365 Address Family: 1 (IPv4) 00:22:50.365 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:50.365 Entry Flags: 00:22:50.365 Duplicate Returned Information: 0 00:22:50.365 Explicit Persistent Connection Support for Discovery: 0 00:22:50.365 Transport Requirements: 00:22:50.365 Secure Channel: Not Specified 00:22:50.365 Port ID: 1 (0x0001) 00:22:50.365 Controller ID: 65535 (0xffff) 00:22:50.365 Admin Max SQ Size: 32 00:22:50.365 Transport Service Identifier: 4420 00:22:50.365 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:50.365 Transport Address: 10.0.0.1 00:22:50.365 Discovery Log Entry 1 00:22:50.365 ---------------------- 00:22:50.365 Transport Type: 3 (TCP) 00:22:50.365 Address Family: 1 (IPv4) 00:22:50.365 Subsystem Type: 2 (NVM Subsystem) 00:22:50.365 Entry Flags: 
00:22:50.365 Duplicate Returned Information: 0 00:22:50.365 Explicit Persistent Connection Support for Discovery: 0 00:22:50.365 Transport Requirements: 00:22:50.365 Secure Channel: Not Specified 00:22:50.365 Port ID: 1 (0x0001) 00:22:50.365 Controller ID: 65535 (0xffff) 00:22:50.365 Admin Max SQ Size: 32 00:22:50.365 Transport Service Identifier: 4420 00:22:50.365 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:50.365 Transport Address: 10.0.0.1 00:22:50.365 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:50.365 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.365 get_feature(0x01) failed 00:22:50.365 get_feature(0x02) failed 00:22:50.365 get_feature(0x04) failed 00:22:50.365 ===================================================== 00:22:50.365 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:50.365 ===================================================== 00:22:50.365 Controller Capabilities/Features 00:22:50.365 ================================ 00:22:50.366 Vendor ID: 0000 00:22:50.366 Subsystem Vendor ID: 0000 00:22:50.366 Serial Number: 816dc9dd2a937e04fb63 00:22:50.366 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:50.366 Firmware Version: 6.7.0-68 00:22:50.366 Recommended Arb Burst: 6 00:22:50.366 IEEE OUI Identifier: 00 00 00 00:22:50.366 Multi-path I/O 00:22:50.366 May have multiple subsystem ports: Yes 00:22:50.366 May have multiple controllers: Yes 00:22:50.366 Associated with SR-IOV VF: No 00:22:50.366 Max Data Transfer Size: Unlimited 00:22:50.366 Max Number of Namespaces: 1024 00:22:50.366 Max Number of I/O Queues: 128 00:22:50.366 NVMe Specification Version (VS): 1.3 00:22:50.366 NVMe Specification Version (Identify): 1.3 00:22:50.366 Maximum Queue Entries: 1024 00:22:50.366 Contiguous Queues Required: No 00:22:50.366 Arbitration Mechanisms Supported 00:22:50.366 Weighted Round Robin: Not Supported 00:22:50.366 Vendor Specific: Not Supported 00:22:50.366 Reset Timeout: 7500 ms 00:22:50.366 Doorbell Stride: 4 bytes 00:22:50.366 NVM Subsystem Reset: Not Supported 00:22:50.366 Command Sets Supported 00:22:50.366 NVM Command Set: Supported 00:22:50.366 Boot Partition: Not Supported 00:22:50.366 Memory Page Size Minimum: 4096 bytes 00:22:50.366 Memory Page Size Maximum: 4096 bytes 00:22:50.366 Persistent Memory Region: Not Supported 00:22:50.366 Optional Asynchronous Events Supported 00:22:50.366 Namespace Attribute Notices: Supported 00:22:50.366 Firmware Activation Notices: Not Supported 00:22:50.366 ANA Change Notices: Supported 00:22:50.366 PLE Aggregate Log Change Notices: Not Supported 00:22:50.366 LBA Status Info Alert Notices: Not Supported 00:22:50.366 EGE Aggregate Log Change Notices: Not Supported 00:22:50.366 Normal NVM Subsystem Shutdown event: Not Supported 00:22:50.366 Zone Descriptor Change Notices: Not Supported 00:22:50.366 Discovery Log Change Notices: Not Supported 00:22:50.366 Controller Attributes 00:22:50.366 128-bit Host Identifier: Supported 00:22:50.366 Non-Operational Permissive Mode: Not Supported 00:22:50.366 NVM Sets: Not Supported 00:22:50.366 Read Recovery Levels: Not Supported 00:22:50.366 Endurance Groups: Not Supported 00:22:50.366 Predictable Latency Mode: Not Supported 00:22:50.366 Traffic Based Keep ALive: Supported 00:22:50.366 Namespace Granularity: Not Supported 
00:22:50.366 SQ Associations: Not Supported 00:22:50.366 UUID List: Not Supported 00:22:50.366 Multi-Domain Subsystem: Not Supported 00:22:50.366 Fixed Capacity Management: Not Supported 00:22:50.366 Variable Capacity Management: Not Supported 00:22:50.366 Delete Endurance Group: Not Supported 00:22:50.366 Delete NVM Set: Not Supported 00:22:50.366 Extended LBA Formats Supported: Not Supported 00:22:50.366 Flexible Data Placement Supported: Not Supported 00:22:50.366 00:22:50.366 Controller Memory Buffer Support 00:22:50.366 ================================ 00:22:50.366 Supported: No 00:22:50.366 00:22:50.366 Persistent Memory Region Support 00:22:50.366 ================================ 00:22:50.366 Supported: No 00:22:50.366 00:22:50.366 Admin Command Set Attributes 00:22:50.366 ============================ 00:22:50.366 Security Send/Receive: Not Supported 00:22:50.366 Format NVM: Not Supported 00:22:50.366 Firmware Activate/Download: Not Supported 00:22:50.366 Namespace Management: Not Supported 00:22:50.366 Device Self-Test: Not Supported 00:22:50.366 Directives: Not Supported 00:22:50.366 NVMe-MI: Not Supported 00:22:50.366 Virtualization Management: Not Supported 00:22:50.366 Doorbell Buffer Config: Not Supported 00:22:50.366 Get LBA Status Capability: Not Supported 00:22:50.366 Command & Feature Lockdown Capability: Not Supported 00:22:50.366 Abort Command Limit: 4 00:22:50.366 Async Event Request Limit: 4 00:22:50.366 Number of Firmware Slots: N/A 00:22:50.366 Firmware Slot 1 Read-Only: N/A 00:22:50.366 Firmware Activation Without Reset: N/A 00:22:50.366 Multiple Update Detection Support: N/A 00:22:50.366 Firmware Update Granularity: No Information Provided 00:22:50.366 Per-Namespace SMART Log: Yes 00:22:50.366 Asymmetric Namespace Access Log Page: Supported 00:22:50.366 ANA Transition Time : 10 sec 00:22:50.366 00:22:50.366 Asymmetric Namespace Access Capabilities 00:22:50.366 ANA Optimized State : Supported 00:22:50.366 ANA Non-Optimized State : Supported 00:22:50.366 ANA Inaccessible State : Supported 00:22:50.366 ANA Persistent Loss State : Supported 00:22:50.366 ANA Change State : Supported 00:22:50.366 ANAGRPID is not changed : No 00:22:50.366 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:50.366 00:22:50.366 ANA Group Identifier Maximum : 128 00:22:50.366 Number of ANA Group Identifiers : 128 00:22:50.366 Max Number of Allowed Namespaces : 1024 00:22:50.366 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:50.366 Command Effects Log Page: Supported 00:22:50.366 Get Log Page Extended Data: Supported 00:22:50.366 Telemetry Log Pages: Not Supported 00:22:50.366 Persistent Event Log Pages: Not Supported 00:22:50.366 Supported Log Pages Log Page: May Support 00:22:50.366 Commands Supported & Effects Log Page: Not Supported 00:22:50.366 Feature Identifiers & Effects Log Page:May Support 00:22:50.366 NVMe-MI Commands & Effects Log Page: May Support 00:22:50.366 Data Area 4 for Telemetry Log: Not Supported 00:22:50.366 Error Log Page Entries Supported: 128 00:22:50.366 Keep Alive: Supported 00:22:50.366 Keep Alive Granularity: 1000 ms 00:22:50.366 00:22:50.366 NVM Command Set Attributes 00:22:50.366 ========================== 00:22:50.366 Submission Queue Entry Size 00:22:50.366 Max: 64 00:22:50.366 Min: 64 00:22:50.366 Completion Queue Entry Size 00:22:50.366 Max: 16 00:22:50.366 Min: 16 00:22:50.366 Number of Namespaces: 1024 00:22:50.366 Compare Command: Not Supported 00:22:50.366 Write Uncorrectable Command: Not Supported 00:22:50.366 Dataset Management Command: Supported 
00:22:50.366 Write Zeroes Command: Supported 00:22:50.366 Set Features Save Field: Not Supported 00:22:50.366 Reservations: Not Supported 00:22:50.366 Timestamp: Not Supported 00:22:50.366 Copy: Not Supported 00:22:50.366 Volatile Write Cache: Present 00:22:50.366 Atomic Write Unit (Normal): 1 00:22:50.366 Atomic Write Unit (PFail): 1 00:22:50.366 Atomic Compare & Write Unit: 1 00:22:50.366 Fused Compare & Write: Not Supported 00:22:50.366 Scatter-Gather List 00:22:50.366 SGL Command Set: Supported 00:22:50.366 SGL Keyed: Not Supported 00:22:50.366 SGL Bit Bucket Descriptor: Not Supported 00:22:50.366 SGL Metadata Pointer: Not Supported 00:22:50.366 Oversized SGL: Not Supported 00:22:50.366 SGL Metadata Address: Not Supported 00:22:50.366 SGL Offset: Supported 00:22:50.366 Transport SGL Data Block: Not Supported 00:22:50.366 Replay Protected Memory Block: Not Supported 00:22:50.366 00:22:50.366 Firmware Slot Information 00:22:50.366 ========================= 00:22:50.366 Active slot: 0 00:22:50.366 00:22:50.366 Asymmetric Namespace Access 00:22:50.366 =========================== 00:22:50.366 Change Count : 0 00:22:50.366 Number of ANA Group Descriptors : 1 00:22:50.366 ANA Group Descriptor : 0 00:22:50.366 ANA Group ID : 1 00:22:50.366 Number of NSID Values : 1 00:22:50.366 Change Count : 0 00:22:50.366 ANA State : 1 00:22:50.366 Namespace Identifier : 1 00:22:50.366 00:22:50.366 Commands Supported and Effects 00:22:50.366 ============================== 00:22:50.366 Admin Commands 00:22:50.366 -------------- 00:22:50.366 Get Log Page (02h): Supported 00:22:50.366 Identify (06h): Supported 00:22:50.366 Abort (08h): Supported 00:22:50.366 Set Features (09h): Supported 00:22:50.366 Get Features (0Ah): Supported 00:22:50.366 Asynchronous Event Request (0Ch): Supported 00:22:50.366 Keep Alive (18h): Supported 00:22:50.366 I/O Commands 00:22:50.366 ------------ 00:22:50.366 Flush (00h): Supported 00:22:50.366 Write (01h): Supported LBA-Change 00:22:50.366 Read (02h): Supported 00:22:50.366 Write Zeroes (08h): Supported LBA-Change 00:22:50.366 Dataset Management (09h): Supported 00:22:50.366 00:22:50.366 Error Log 00:22:50.366 ========= 00:22:50.366 Entry: 0 00:22:50.366 Error Count: 0x3 00:22:50.366 Submission Queue Id: 0x0 00:22:50.366 Command Id: 0x5 00:22:50.366 Phase Bit: 0 00:22:50.366 Status Code: 0x2 00:22:50.366 Status Code Type: 0x0 00:22:50.366 Do Not Retry: 1 00:22:50.366 Error Location: 0x28 00:22:50.366 LBA: 0x0 00:22:50.366 Namespace: 0x0 00:22:50.366 Vendor Log Page: 0x0 00:22:50.366 ----------- 00:22:50.366 Entry: 1 00:22:50.366 Error Count: 0x2 00:22:50.366 Submission Queue Id: 0x0 00:22:50.366 Command Id: 0x5 00:22:50.366 Phase Bit: 0 00:22:50.366 Status Code: 0x2 00:22:50.366 Status Code Type: 0x0 00:22:50.366 Do Not Retry: 1 00:22:50.366 Error Location: 0x28 00:22:50.366 LBA: 0x0 00:22:50.366 Namespace: 0x0 00:22:50.366 Vendor Log Page: 0x0 00:22:50.366 ----------- 00:22:50.366 Entry: 2 00:22:50.366 Error Count: 0x1 00:22:50.366 Submission Queue Id: 0x0 00:22:50.366 Command Id: 0x4 00:22:50.366 Phase Bit: 0 00:22:50.366 Status Code: 0x2 00:22:50.367 Status Code Type: 0x0 00:22:50.367 Do Not Retry: 1 00:22:50.367 Error Location: 0x28 00:22:50.367 LBA: 0x0 00:22:50.367 Namespace: 0x0 00:22:50.367 Vendor Log Page: 0x0 00:22:50.367 00:22:50.367 Number of Queues 00:22:50.367 ================ 00:22:50.367 Number of I/O Submission Queues: 128 00:22:50.367 Number of I/O Completion Queues: 128 00:22:50.367 00:22:50.367 ZNS Specific Controller Data 00:22:50.367 
============================ 00:22:50.367 Zone Append Size Limit: 0 00:22:50.367 00:22:50.367 00:22:50.367 Active Namespaces 00:22:50.367 ================= 00:22:50.367 get_feature(0x05) failed 00:22:50.367 Namespace ID:1 00:22:50.367 Command Set Identifier: NVM (00h) 00:22:50.367 Deallocate: Supported 00:22:50.367 Deallocated/Unwritten Error: Not Supported 00:22:50.367 Deallocated Read Value: Unknown 00:22:50.367 Deallocate in Write Zeroes: Not Supported 00:22:50.367 Deallocated Guard Field: 0xFFFF 00:22:50.367 Flush: Supported 00:22:50.367 Reservation: Not Supported 00:22:50.367 Namespace Sharing Capabilities: Multiple Controllers 00:22:50.367 Size (in LBAs): 1953525168 (931GiB) 00:22:50.367 Capacity (in LBAs): 1953525168 (931GiB) 00:22:50.367 Utilization (in LBAs): 1953525168 (931GiB) 00:22:50.367 UUID: d179615b-941b-4db9-8fe6-66545d3e90d4 00:22:50.367 Thin Provisioning: Not Supported 00:22:50.367 Per-NS Atomic Units: Yes 00:22:50.367 Atomic Boundary Size (Normal): 0 00:22:50.367 Atomic Boundary Size (PFail): 0 00:22:50.367 Atomic Boundary Offset: 0 00:22:50.367 NGUID/EUI64 Never Reused: No 00:22:50.367 ANA group ID: 1 00:22:50.367 Namespace Write Protected: No 00:22:50.367 Number of LBA Formats: 1 00:22:50.367 Current LBA Format: LBA Format #00 00:22:50.367 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:50.367 00:22:50.367 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:50.367 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:50.367 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:22:50.367 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:50.367 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:22:50.367 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:50.367 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:50.367 rmmod nvme_tcp 00:22:50.367 rmmod nvme_fabrics 00:22:50.367 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:50.367 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:22:50.367 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:22:50.367 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:50.367 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:50.367 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:50.367 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:50.367 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:50.367 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:50.367 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.367 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:50.367 20:21:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.896 20:21:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:52.896 
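With the SPDK-side cleanup done, the trap's clean_kernel_target (traced next) tears down the configfs target that configure_kernel_target built earlier in this test. Since xtrace does not print redirection targets, the attribute file names in the sketch below are the standard kernel nvmet configfs names rather than something read from the log; the device, NQN, address and port mirror the trace, and everything here is an illustration rather than the verbatim nvmf/common.sh helpers:

```bash
# Condensed sketch of the configfs flow in this test: configure_kernel_target
# (traced earlier) plus the clean_kernel_target teardown traced next.
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

configure_kernel_target_sketch() {
    modprobe nvmet                      # shown in the trace
    modprobe nvmet-tcp                  # assumption: needed before addr_trtype can be set to tcp
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo "SPDK-$nqn"  > "$subsys/attr_model"              # model string echoed in the trace
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                    # expose the subsystem on the port
}

clean_kernel_target_sketch() {
    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$port/subsystems/$nqn"
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet
}
```

The ln -s into the port's subsystems/ directory is what makes the subsystem reachable at 10.0.0.1:4420, which is why the discovery log earlier in this test reports two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.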
20:21:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:52.896 20:21:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:52.896 20:21:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:22:52.896 20:21:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:52.896 20:21:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:52.896 20:21:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:52.896 20:21:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:52.896 20:21:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:52.896 20:21:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:52.896 20:21:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:53.831 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:53.831 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:53.831 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:53.831 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:53.831 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:53.831 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:53.831 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:53.831 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:53.831 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:53.831 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:53.831 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:53.831 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:53.831 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:53.831 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:53.831 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:53.831 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:54.766 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:22:54.766 00:22:54.766 real 0m8.854s 00:22:54.766 user 0m1.879s 00:22:54.766 sys 0m3.097s 00:22:54.766 20:21:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:54.766 20:21:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.766 ************************************ 00:22:54.766 END TEST nvmf_identify_kernel_target 00:22:54.766 ************************************ 00:22:54.766 20:21:41 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:54.766 20:21:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:54.766 20:21:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:54.766 20:21:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:54.766 ************************************ 00:22:54.766 START TEST nvmf_auth_host 00:22:54.766 ************************************ 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 
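clean_kernel_target above removes the kernel nvmet configuration left by the identify test, leaf-first, before setup.sh hands the devices back to the test drivers. Condensed into a sketch, with the redirection target of the bare "echo 0" assumed to be the namespace enable attribute (xtrace does not show redirections):

nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
echo 0 > "$subsys/namespaces/1/enable"                   # assumed target of the traced "echo 0"
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$nqn   # unlink the subsystem from the port first
rmdir "$subsys/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet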
00:22:54.766 * Looking for test storage... 00:22:54.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.766 20:21:41 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:54.767 20:21:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.663 
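The tables above only collect PCI device IDs (E810 is 0x1592/0x159b, X722 is 0x37d2, plus the Mellanox parts); the loop that follows resolves each matching PCI function to its kernel net device by listing sysfs. A stand-alone version of that lookup using the same sysfs path as the script (the lspci filter is an added convenience, not part of the trace):

# map every Intel E810 function (8086:159b) to its net device name
for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
  for dev in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$dev" ] && echo "$pci -> $(basename "$dev")"
  done
done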
20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:56.663 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:56.663 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:56.663 Found net devices under 0000:09:00.0: 
cvl_0_0 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:56.663 Found net devices under 0000:09:00.1: cvl_0_1 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.663 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:56.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:56.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:22:56.938 00:22:56.938 --- 10.0.0.2 ping statistics --- 00:22:56.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.938 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:56.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:56.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:22:56.938 00:22:56.938 --- 10.0.0.1 ping statistics --- 00:22:56.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.938 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=300575 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 300575 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 300575 ']' 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
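The sequence above splits the two E810 ports across a network namespace so target and initiator traffic really crosses the wire on one machine: cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened, both directions are ping-verified, and nvmf_tgt is started inside the namespace. The same setup, condensed (paths shortened to a relative SPDK build tree):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP back in
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth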
00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:56.938 20:21:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.873 20:21:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:57.873 20:21:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:22:57.873 20:21:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:57.873 20:21:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:57.873 20:21:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.874 20:21:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.874 20:21:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:57.874 20:21:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:22:57.874 20:21:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:57.874 20:21:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:57.874 20:21:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:57.874 20:21:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:57.874 20:21:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:57.874 20:21:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:57.874 20:21:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=95449807e4792d7b493e7959cb0d429f 00:22:57.874 20:21:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:57.874 20:21:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.CoE 00:22:57.874 20:21:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 95449807e4792d7b493e7959cb0d429f 0 00:22:57.874 20:21:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 95449807e4792d7b493e7959cb0d429f 0 00:22:57.874 20:21:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:57.874 20:21:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:57.874 20:21:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=95449807e4792d7b493e7959cb0d429f 00:22:57.874 20:21:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:57.874 20:21:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:57.874 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.CoE 00:22:57.874 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.CoE 00:22:57.874 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.CoE 00:22:57.874 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:22:57.874 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:57.874 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:57.874 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:57.874 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:57.874 
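gen_dhchap_key above builds each secret from len/2 random bytes read as hex; an inline Python helper (piped in on stdin, so its body is not visible in the trace) wraps the hex into the DH-HMAC-CHAP secret representation, DHHC-1:<hash-id>:<base64 payload>:, and the result lands in a mode-0600 temp file. A sketch of the visible steps, with the wrapping left to that helper (the payload layout, key bytes plus a trailing CRC, is an assumption from the spec format rather than something shown here):

key=$(xxd -p -c0 -l 16 /dev/urandom)     # 16 random bytes -> 32 hex characters
file=$(mktemp -t spdk.key-null.XXX)
# the helper writes the wrapped form, roughly DHHC-1:00:<base64 payload>:, into "$file"
chmod 0600 "$file"
keys[0]=$file                            # ckeys[0] comes from the sha512/64 run that follows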
20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:57.874 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:57.874 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e59f7909e11d7a50f463e049082adb3f09b2eda950bb9dfce31b3f01ae42bdc4 00:22:57.874 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:57.874 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.uEQ 00:22:57.874 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e59f7909e11d7a50f463e049082adb3f09b2eda950bb9dfce31b3f01ae42bdc4 3 00:22:57.874 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e59f7909e11d7a50f463e049082adb3f09b2eda950bb9dfce31b3f01ae42bdc4 3 00:22:57.874 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:57.874 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:57.874 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e59f7909e11d7a50f463e049082adb3f09b2eda950bb9dfce31b3f01ae42bdc4 00:22:57.874 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:57.874 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:58.132 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.uEQ 00:22:58.132 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.uEQ 00:22:58.132 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.uEQ 00:22:58.132 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:22:58.132 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:58.132 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cd495398f522c7648632c38b37e5ae91642ca466a0d9e160 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.P8o 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cd495398f522c7648632c38b37e5ae91642ca466a0d9e160 0 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cd495398f522c7648632c38b37e5ae91642ca466a0d9e160 0 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cd495398f522c7648632c38b37e5ae91642ca466a0d9e160 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.P8o 00:22:58.133 20:21:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.P8o 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.P8o 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9c6e3893fb5cc9b0fa7a18ae6531dc3c937516b1205711ef 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.NM1 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9c6e3893fb5cc9b0fa7a18ae6531dc3c937516b1205711ef 2 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9c6e3893fb5cc9b0fa7a18ae6531dc3c937516b1205711ef 2 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9c6e3893fb5cc9b0fa7a18ae6531dc3c937516b1205711ef 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.NM1 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.NM1 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.NM1 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=20d0b36773ba99b4ac3dcaede70e6eee 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.sQ0 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 20d0b36773ba99b4ac3dcaede70e6eee 1 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 20d0b36773ba99b4ac3dcaede70e6eee 1 
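The second field of each generated secret records the hash it is meant for, using the digests table the helper sets up (null=0, sha256=1, sha384=2, sha512=3); that is why the keys printed later in this run start with DHHC-1:00:, DHHC-1:02: and DHHC-1:03:. As a quick reference (the printf is illustrative only):

declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
printf 'DHHC-1:%02d: prefix for a %s secret\n' "${digests[sha384]}" sha384   # -> DHHC-1:02: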
00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=20d0b36773ba99b4ac3dcaede70e6eee 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.sQ0 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.sQ0 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.sQ0 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1979c2dcdaec7dc032d6ad54de2ccecf 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.GIb 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1979c2dcdaec7dc032d6ad54de2ccecf 1 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1979c2dcdaec7dc032d6ad54de2ccecf 1 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1979c2dcdaec7dc032d6ad54de2ccecf 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.GIb 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.GIb 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.GIb 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=d0f61a0c2b83f1f12cb3e6c01f2610af46e61e6f2bad8f06 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.OTw 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d0f61a0c2b83f1f12cb3e6c01f2610af46e61e6f2bad8f06 2 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d0f61a0c2b83f1f12cb3e6c01f2610af46e61e6f2bad8f06 2 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d0f61a0c2b83f1f12cb3e6c01f2610af46e61e6f2bad8f06 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:58.133 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.OTw 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.OTw 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.OTw 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b5bd378ececb9988b5fb078dc0910beb 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Rmq 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b5bd378ececb9988b5fb078dc0910beb 0 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b5bd378ececb9988b5fb078dc0910beb 0 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b5bd378ececb9988b5fb078dc0910beb 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Rmq 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Rmq 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Rmq 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4313044ce7e457bbe4619c75642246f67bb7a598d7041a0e760554d057fa4e42 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.xy4 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4313044ce7e457bbe4619c75642246f67bb7a598d7041a0e760554d057fa4e42 3 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4313044ce7e457bbe4619c75642246f67bb7a598d7041a0e760554d057fa4e42 3 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4313044ce7e457bbe4619c75642246f67bb7a598d7041a0e760554d057fa4e42 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.xy4 00:22:58.391 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.xy4 00:22:58.392 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.xy4 00:22:58.392 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:22:58.392 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 300575 00:22:58.392 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 300575 ']' 00:22:58.392 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.392 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:58.392 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
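Once the target is listening, every generated file is registered with its keyring under a stable name (key0..key4 and, where present, ckey0..ckey3), so later bdev_nvme RPCs can refer to secrets by name rather than by path; that is what the keyring_file_add_key loop below does. Driven directly with scripts/rpc.py (the test's rpc_cmd wrapper issues the same RPC; the relative rpc.py path assumes a default build tree), it reduces to roughly:

for i in "${!keys[@]}"; do
  scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
  [ -n "${ckeys[i]:-}" ] && scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
done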
00:22:58.392 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:58.392 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CoE 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.uEQ ]] 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uEQ 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.P8o 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.NM1 ]] 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.NM1 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.sQ0 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.GIb ]] 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GIb 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.OTw 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Rmq ]] 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Rmq 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.xy4 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
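configure_kernel_target, which starts here, assembles the kernel nvmet subsystem the SPDK host will later authenticate against, entirely through configfs: load nvmet, pick a usable local NVMe namespace (the GPT check below rejects disks that are in use), create the subsystem, namespace and port directories, point the namespace at the block device, and publish the port on 10.0.0.1:4420 over TCP. The attribute names in this sketch come from the standard nvmet configfs layout, since xtrace does not show where each echo is redirected:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
modprobe nvmet                                   # the tcp transport module is typically pulled in when the port is enabled
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo 1            > "$subsys/attr_allow_any_host"   # auth.sh later flips this to 0 and adds an explicit allowed host
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"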
00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:58.650 20:21:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:00.024 Waiting for block devices as requested 00:23:00.024 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:00.024 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:00.024 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:00.024 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:00.024 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:00.282 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:00.282 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:00.282 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:00.282 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:23:00.540 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:00.540 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:00.540 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:00.798 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:00.798 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:00.798 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:00.798 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:01.057 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:01.315 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:01.315 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:01.315 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:01.315 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:23:01.315 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:01.315 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:23:01.315 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:01.315 20:21:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:01.315 20:21:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:01.573 No valid GPT data, bailing 00:23:01.573 20:21:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:01.573 20:21:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:01.573 20:21:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:01.573 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:01.573 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:01.573 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:01.573 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:01.573 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:01.573 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:01.573 20:21:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:23:01.573 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:01.573 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:01.573 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:01.573 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:01.573 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:01.573 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:01.573 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:01.573 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:23:01.573 00:23:01.573 Discovery Log Number of Records 2, Generation counter 2 00:23:01.573 =====Discovery Log Entry 0====== 00:23:01.573 trtype: tcp 00:23:01.573 adrfam: ipv4 00:23:01.573 subtype: current discovery subsystem 00:23:01.573 treq: not specified, sq flow control disable supported 00:23:01.573 portid: 1 00:23:01.573 trsvcid: 4420 00:23:01.574 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:01.574 traddr: 10.0.0.1 00:23:01.574 eflags: none 00:23:01.574 sectype: none 00:23:01.574 =====Discovery Log Entry 1====== 00:23:01.574 trtype: tcp 00:23:01.574 adrfam: ipv4 00:23:01.574 subtype: nvme subsystem 00:23:01.574 treq: not specified, sq flow control disable supported 00:23:01.574 portid: 1 00:23:01.574 trsvcid: 4420 00:23:01.574 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:01.574 traddr: 10.0.0.1 00:23:01.574 eflags: none 00:23:01.574 sectype: none 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 
]] 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.574 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.832 nvme0n1 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.832 
20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: ]] 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:01.832 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.833 
20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.833 nvme0n1 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.833 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.091 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.091 20:21:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.091 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.091 20:21:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.091 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.091 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.091 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:02.091 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.091 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:02.091 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:02.091 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:02.091 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:02.091 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:02.091 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:02.091 20:21:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:02.091 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:02.091 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: ]] 00:23:02.091 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:02.091 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:02.091 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.091 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:02.091 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:02.091 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:02.091 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.092 nvme0n1 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
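[editorial sketch] The trace above repeats one basic flow for every digest / DH-group / key-id combination: constrain the initiator's DH-HMAC-CHAP options, attach to the target with a host key (and, when present, a controller key), confirm the controller appears, then detach before the next combination. A minimal sketch of that flow, expressed against SPDK's scripts/rpc.py rather than the test's rpc_cmd wrapper (an assumed but equivalent way to issue the same RPCs; the address, NQNs, flags and key names are taken from the trace, and key1/ckey1 refer to keys registered earlier in the test run, outside this excerpt):

    #!/usr/bin/env bash
    # Hedged sketch, not the test script itself: the same RPC sequence as the
    # sha256 / ffdhe2048 / keyid=1 iteration traced above.
    RPC=./scripts/rpc.py   # assumed location of SPDK's RPC client

    # Restrict the initiator to one digest and one DH group for this pass.
    $RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Attach to the kernel nvmet target with DH-HMAC-CHAP, using the host and
    # controller keys for key id 1 (key1 / ckey1 in the trace).
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Verify the controller came up, then tear it down before the next combination.
    $RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expected output: nvme0
    $RPC bdev_nvme_detach_controller nvme0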
00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: ]] 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.092 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.350 nvme0n1 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: ]] 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:02.350 20:21:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:02.350 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.351 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:02.351 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.351 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.351 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.351 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.351 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.351 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.351 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.351 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.351 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.351 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.351 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.351 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.351 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.351 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.351 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:02.351 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.351 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.609 nvme0n1 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.609 nvme0n1 00:23:02.609 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:02.610 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:02.868 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:02.868 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: ]] 00:23:02.868 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:02.868 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:02.868 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.868 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:02.868 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:02.868 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:02.868 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:02.868 20:21:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:02.868 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.868 20:21:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.868 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.868 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.868 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.868 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.868 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.868 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.868 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.868 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.868 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.868 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.868 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.868 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.868 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.868 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.868 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.126 nvme0n1 00:23:03.126 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.126 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.126 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.126 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.126 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.126 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.126 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.126 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.126 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.126 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.126 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.126 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.126 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:03.126 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.126 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.126 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:03.126 20:21:50 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: ]] 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.127 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.385 nvme0n1 00:23:03.385 
20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: ]] 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.385 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.644 nvme0n1 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
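[editorial sketch] The bare "echo 'hmac(sha256)'", "echo ffdhe3072" and "echo DHHC-1:..." records are nvmet_auth_set_key provisioning the target side of the credentials; xtrace does not print redirections, so the destinations below are an assumption based on the Linux nvmet in-band-auth configfs interface and on the hosts/ directory created earlier in the trace. A hedged sketch of what one such call amounts to:

    #!/usr/bin/env bash
    # Hedged sketch of the target-side half of one iteration. The attribute names
    # (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are assumed from
    # the kernel nvmet configfs layout; only the echoed values appear in the trace.
    HOST_NQN=nqn.2024-02.io.spdk:host0
    HOST_DIR=/sys/kernel/config/nvmet/hosts/$HOST_NQN

    # Key strings abbreviated here; use the full DHHC-1:xx:...: values echoed in
    # the trace for the key id being exercised.
    KEY='DHHC-1:02:<host key from trace>'
    CKEY='DHHC-1:00:<controller key from trace>'

    echo 'hmac(sha256)' > "$HOST_DIR/dhchap_hash"      # digest under test
    echo  ffdhe3072     > "$HOST_DIR/dhchap_dhgroup"   # DH group under test
    echo "$KEY"         > "$HOST_DIR/dhchap_key"       # host key
    echo "$CKEY"        > "$HOST_DIR/dhchap_ctrl_key"  # controller key, when bidirectional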
00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: ]] 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.644 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.903 nvme0n1 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.903 
20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.903 20:21:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.903 20:21:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.904 20:21:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:03.904 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.904 20:21:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.162 nvme0n1 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.162 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: ]] 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:04.748 20:21:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.748 20:21:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.006 nvme0n1 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: ]] 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.006 20:21:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.006 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.264 nvme0n1 00:23:05.264 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.264 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.264 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.264 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.264 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.264 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: ]] 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.522 20:21:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.522 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.523 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.523 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.523 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.523 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.523 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.523 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.523 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.523 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.523 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.523 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.523 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.523 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.780 nvme0n1 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: ]] 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.780 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.781 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.781 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.781 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.781 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.781 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.781 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.781 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.781 20:21:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.781 20:21:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:05.781 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.781 20:21:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.037 nvme0n1 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.037 20:21:53 
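The cycle that the xtrace above keeps repeating can be condensed into the sketch below. It is reconstructed from the trace rather than copied from host/auth.sh: the helper names (nvmet_auth_set_key, get_main_ns_ip, rpc_cmd), the NQNs, and the RPC arguments all appear in the log, while the surrounding variable scaffolding is an assumption.

  # One digest/dhgroup/keyid iteration as exercised above (sha256 / ffdhe4096 / key slot 0).
  digest=sha256
  dhgroup=ffdhe4096
  keyid=0

  # Target side: install key $keyid (and its controller key, when one is configured)
  # for hmac(sha256) and $dhgroup on the nvmet subsystem.
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

  # Host side: restrict the initiator to the digest and DH group under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach over TCP with the DH-HMAC-CHAP key pair (the key names key0..key4 and
  # ckey0..ckey3 are assumed to have been registered earlier in the test), verify
  # the controller came up, then detach. The real helper only passes
  # --dhchap-ctrlr-key when a controller key exists for the slot (host/auth.sh@58);
  # key4 is attached without one in the trace.
  ip=$(get_main_ns_ip)   # resolves NVMF_INITIATOR_IP, 10.0.0.1 in this run
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0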
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.037 20:21:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.038 20:21:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.038 20:21:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.038 20:21:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.038 20:21:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.038 20:21:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.038 20:21:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.038 20:21:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.038 20:21:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.038 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:06.038 20:21:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.038 20:21:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.295 nvme0n1 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.295 20:21:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:08.194 20:21:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: ]] 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.194 20:21:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.761 nvme0n1 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.761 
20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: ]] 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.761 20:21:55 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.761 20:21:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.327 nvme0n1 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: ]] 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.327 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.893 nvme0n1 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.893 
20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: ]] 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.893 20:21:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.465 nvme0n1 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.465 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.031 nvme0n1 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: ]] 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.031 20:21:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.964 nvme0n1 00:23:11.964 20:21:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.964 20:21:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.964 20:21:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.964 20:21:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.964 20:21:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.964 20:21:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: ]] 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.964 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:11.965 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:11.965 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.965 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:11.965 20:21:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.965 20:21:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.965 20:21:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.965 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.965 20:21:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:11.965 20:21:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.965 20:21:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.965 20:21:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.965 20:21:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.965 20:21:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.965 20:21:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.965 20:21:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.965 20:21:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.965 20:21:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.965 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.965 20:21:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.965 20:21:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.898 nvme0n1 00:23:12.898 20:21:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.898 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: ]] 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.899 20:21:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.829 nvme0n1 00:23:13.829 20:22:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.829 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.829 20:22:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.829 20:22:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.829 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.829 20:22:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.829 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.829 
20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.829 20:22:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.829 20:22:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.829 20:22:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.829 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.829 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:13.829 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.829 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:13.829 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:13.829 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:13.829 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:13.829 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: ]] 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
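For reference, the get_main_ns_ip helper that the trace keeps stepping through (nvmf/common.sh@741-755) reduces to a small lookup: pick the per-transport IP variable name, then resolve it, which for tcp in this run yields NVMF_INITIATOR_IP = 10.0.0.1. The sketch below is reconstructed from the xtrace only, not from the SPDK source; the indirect expansion and the early returns are assumptions about steps the trace does not show.

  # Reconstructed sketch of the IP-selection helper (names taken from the trace;
  # the ${!ip} indirect expansion is an assumed detail that xtrace does not print).
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs use the first target IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this log) use the initiator IP

      [[ -z $TEST_TRANSPORT ]] && return 1                    # shows up as [[ -z tcp ]] above
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # [[ -z NVMF_INITIATOR_IP ]]
      ip=${ip_candidates[$TEST_TRANSPORT]}                    # ip=NVMF_INITIATOR_IP
      ip=${!ip}                                               # resolves to 10.0.0.1 in this run
      [[ -z $ip ]] && return 1
      echo "$ip"
  }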
00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.830 20:22:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.770 nvme0n1 00:23:14.770 20:22:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.770 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.770 20:22:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.770 20:22:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.770 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.770 20:22:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.770 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.770 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.770 20:22:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.770 20:22:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.027 20:22:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.027 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.027 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:15.028 
20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.028 20:22:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.960 nvme0n1 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: ]] 00:23:15.960 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.961 20:22:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.961 nvme0n1 00:23:15.961 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.961 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.961 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.961 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.961 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.961 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.961 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.961 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.961 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.961 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: ]] 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
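Each connect_authenticate pass above comes down to two initiator-side RPCs: restrict the host to a single digest/DH-group pair, then attach the controller with the DH-HMAC-CHAP key (plus the controller key when one is configured, giving bidirectional authentication). rpc_cmd in the trace is the autotest wrapper around SPDK's scripts/rpc.py; run standalone, the sha384/ffdhe2048/key0 iteration just completed would look roughly like this (key0/ckey0 refer to keys registered earlier in the script, outside this excerpt):

  # Standalone equivalent of one authentication attempt against 10.0.0.1:4420
  ./scripts/rpc.py bdev_nvme_set_options \
          --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key key0 --dhchap-ctrlr-key ckey0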
00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.219 nvme0n1 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.219 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: ]] 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.220 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.478 nvme0n1 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: ]] 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.478 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.737 nvme0n1 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.737 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.737 nvme0n1 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: ]] 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
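The success check that repeats between attaches is equally compact: the new controller must be reported under the expected name before it is torn down for the next digest/DH-group combination. A condensed equivalent of the rpc_cmd/jq sequence in the trace (the script itself writes the comparison with the escaped pattern \n\v\m\e\0 and wraps everything in xtrace helpers):

  # Verify the authenticated attach, then detach before the next iteration
  name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]                              # a non-zero exit here fails the test
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0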
00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.738 20:22:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.996 nvme0n1 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: ]] 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
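All of these near-identical blocks are produced by a three-level sweep in host/auth.sh (the @100/@101/@102 loop markers in the trace): every digest is exercised against every DH group and every key index, provisioning the target first and then authenticating from the initiator. Schematically, with the array contents inferred from the values seen in this run rather than copied from the script:

  for digest in "${digests[@]}"; do           # sha256, sha384, ... (auth.sh@100)
      for dhgroup in "${dhgroups[@]}"; do     # ffdhe2048 .. ffdhe8192 (auth.sh@101)
          for keyid in "${!keys[@]}"; do      # 0..4; key 4 has no controller key (auth.sh@102)
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target-side key setup
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator attach + checks
          done
      done
  done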
00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.996 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.254 nvme0n1 00:23:17.254 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.254 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.254 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.254 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.254 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.254 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.254 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: ]] 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.255 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.514 nvme0n1 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: ]] 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.514 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.772 nvme0n1 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.772 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.031 nvme0n1 00:23:18.031 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.031 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.031 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.031 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.031 20:22:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.031 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.031 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.031 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.031 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.031 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.031 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.031 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:18.031 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.031 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:18.031 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.031 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.031 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:18.031 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:18.031 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:18.031 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: ]] 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.032 20:22:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.291 nvme0n1 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: ]] 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.291 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.549 nvme0n1 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.549 20:22:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:18.549 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: ]] 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.550 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.808 nvme0n1 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: ]] 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:18.808 20:22:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.808 20:22:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.066 nvme0n1 00:23:19.066 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.066 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.066 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.066 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.066 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.066 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.066 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.066 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.066 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.066 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:19.324 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.583 nvme0n1 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: ]] 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.583 20:22:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.151 nvme0n1 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: ]] 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.151 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.717 nvme0n1 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.717 20:22:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: ]] 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.717 20:22:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.283 nvme0n1 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: ]] 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.283 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.849 nvme0n1 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:21.849 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.850 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:21.850 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.850 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.850 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.850 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.850 20:22:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.850 20:22:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.850 20:22:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.850 20:22:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.850 20:22:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.850 20:22:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.850 20:22:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.850 20:22:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
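The trace above cycles through the same per-key sequence for every digest/DH-group combination: program a key on the nvmet target, restrict the host to a single digest and DH group, attach with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key is defined), confirm the controller shows up as nvme0, and detach again. A condensed sketch of that loop, with the digest fixed at sha384 as in this block, and assuming the helpers and arrays used by the trace (rpc_cmd, nvmet_auth_set_key, dhgroups, keys, ckeys) are already sourced from SPDK's host/auth.sh test environment:

    # Condensed sketch of the per-key loop seen in the trace above.
    # rpc_cmd, nvmet_auth_set_key and the dhgroups/keys/ckeys arrays are
    # assumed to come from SPDK's host/auth.sh; digest fixed at sha384.
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Program key $keyid (and its controller key, if any) on the target side.
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
            # Allow only this digest/DH group on the host side.
            rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            # Pass --dhchap-ctrlr-key only when a controller key is defined for this keyid.
            ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
            # Connect with in-band DH-HMAC-CHAP authentication.
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key$keyid" "${ckey[@]}"
            # The controller must come up as nvme0; tear it down before the next key.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done

In the trace itself the get_controllers check and detach appear at the top of the following iteration rather than at the bottom of the current one, but the effect is the same: every key must authenticate on its own fresh connection before the next combination is tried.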
00:23:21.850 20:22:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.850 20:22:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.850 20:22:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:21.850 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.850 20:22:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.416 nvme0n1 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: ]] 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.416 20:22:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.351 nvme0n1 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:23.351 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: ]] 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.352 20:22:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.286 nvme0n1 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: ]] 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.286 20:22:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.287 20:22:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.287 20:22:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.287 20:22:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.287 20:22:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.287 20:22:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.287 20:22:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.287 20:22:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.220 nvme0n1 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: ]] 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.220 20:22:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.221 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.221 20:22:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.221 20:22:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.221 20:22:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.221 20:22:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.221 20:22:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.221 20:22:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.221 20:22:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.221 20:22:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.221 20:22:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.221 20:22:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.221 20:22:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:25.221 20:22:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.221 20:22:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.155 nvme0n1 00:23:26.155 20:22:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.155 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:26.155 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.155 20:22:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.155 20:22:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.155 20:22:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.413 20:22:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.413 20:22:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.360 nvme0n1 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: ]] 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.360 nvme0n1 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.360 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.618 20:22:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.618 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.618 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.618 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.618 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.618 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.618 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:27.618 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.618 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:27.618 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:27.618 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:27.618 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:27.618 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: ]] 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.619 nvme0n1 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: ]] 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.619 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.877 nvme0n1 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.877 20:22:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: ]] 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.877 20:22:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.877 20:22:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.134 nvme0n1 00:23:28.134 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.134 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.134 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.134 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.134 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.134 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.134 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.134 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.134 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.135 nvme0n1 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.135 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: ]] 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.392 nvme0n1 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.392 
20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: ]] 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.392 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.677 20:22:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.677 nvme0n1 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: ]] 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.677 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.935 nvme0n1 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.935 20:22:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:28.935 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:28.936 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: ]] 00:23:28.936 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:28.936 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:28.936 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.936 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.936 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:28.936 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:28.936 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.936 20:22:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:28.936 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.936 20:22:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.936 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.936 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.936 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.936 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.936 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.936 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.936 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
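[editorial sketch] The get_main_ns_ip fragment that repeats before every attach is simply picking the initiator-side address by transport: the associative array holds variable names, which are then resolved through indirect expansion, with the value falling out as 10.0.0.1 in this run. A condensed reconstruction of the logic visible in the trace, not the verbatim helper from nvmf/common.sh; the TEST_TRANSPORT variable name is an assumption (the trace only shows that its value here is tcp):

  # Condensed reconstruction of the get_main_ns_ip logic shown in the trace above.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP   # name of the variable holding the RDMA-side target IP
          [tcp]=NVMF_INITIATOR_IP       # name of the variable holding the TCP initiator IP
      )
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1       # indirect expansion; resolves to 10.0.0.1 here
      echo "${!ip}"
  }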
00:23:28.936 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.936 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.936 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.936 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.936 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.936 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:28.936 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.936 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.194 nvme0n1 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.194 
20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.194 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.195 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.195 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.195 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:29.195 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.195 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.453 nvme0n1 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: ]] 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.453 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:29.454 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:29.454 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.454 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:29.454 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.454 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.454 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.454 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.454 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.454 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.454 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.454 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.454 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.454 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.454 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.454 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.454 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.454 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.454 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.454 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.454 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.712 nvme0n1 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: ]] 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:29.712 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:29.713 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.713 20:22:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:29.713 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.713 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.713 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.713 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.713 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.713 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.713 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.713 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.713 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.713 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.713 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.713 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.713 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.713 20:22:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.713 20:22:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.713 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.713 20:22:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.971 nvme0n1 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
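The get_main_ns_ip helper whose trace keeps repeating above (nvmf/common.sh@741-755) resolves which address the initiator should dial for the transport under test. Only fragments of the function are visible in this log, so the following is an approximate reconstruction: the two candidate variable names come straight from the trace, while the transport variable name (TEST_TRANSPORT) and the indirect expansion are assumptions.

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # Pick the variable *name* for the transport in use, then resolve it
        # indirectly; for tcp this yields $NVMF_INITIATOR_IP, i.e. 10.0.0.1 here.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }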
00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: ]] 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.971 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.229 nvme0n1 00:23:30.229 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.229 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:23:30.229 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.229 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.229 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.229 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: ]] 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.487 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.745 nvme0n1 00:23:30.745 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.745 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.745 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.745 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.745 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.745 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.745 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.745 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.745 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.745 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.745 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.745 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.745 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:30.745 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.746 20:22:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.003 nvme0n1 00:23:31.003 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.003 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.003 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.003 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.003 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.003 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.003 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.003 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.003 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:31.003 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.003 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.003 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:31.003 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.003 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:31.003 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: ]] 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
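At this point the trace has advanced from ffdhe4096 to ffdhe6144, and the same pattern repeats once more for ffdhe8192 further down. The whole section is a nested sweep whose shape can be read off the loop headers at host/auth.sh@101-104; roughly, assuming the digest is already pinned to sha512 for this pass and that keys[] holds the five DHHC-1 strings echoed throughout the trace:

    digest=sha512
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # the groups visible in this excerpt
    keys=(key0 key1 key2 key3 key4)                      # stand-ins for the DHHC-1:xx:... strings above
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                   # key indices 0..4
            nvmet_auth_set_key  "$digest" "$dhgroup" "$keyid"   # target side: the echo'd hash/dhgroup/key values, presumably written into the nvmet subsystem config
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side: the four-RPC sequence sketched earlier
        done
    done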
00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.004 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.568 nvme0n1 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: ]] 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
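One detail worth noting in the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) lines running through this trace: the --dhchap-ctrlr-key argument is only generated when a controller key exists for that index, which is why the keyid=4 iterations show an empty ckey= and a bare --dhchap-key key4 on the attach, i.e. unidirectional authentication only. A standalone illustration of the same bash idiom (the array contents here are made up for the example):

    ckeys=([0]=secret0 [1]=secret1 [4]="")    # index 4 deliberately left empty
    for keyid in "${!ckeys[@]}"; do
        # ${var:+word} expands to "word" only when var is set and non-empty,
        # so the array stays empty (no flag at all) when there is no ctrlr key.
        extra=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${extra[*]:-unidirectional only}"
    done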
00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.568 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.135 nvme0n1 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: ]] 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.135 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.702 nvme0n1 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: ]] 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.702 20:22:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.960 20:22:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.960 20:22:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.960 20:22:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.960 20:22:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.960 20:22:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.960 20:22:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.960 20:22:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.960 20:22:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.960 20:22:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.960 20:22:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:32.960 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.960 20:22:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.218 nvme0n1 00:23:33.218 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.218 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.218 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.218 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.218 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.218 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.476 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.734 nvme0n1 00:23:33.734 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.734 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.734 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.734 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.734 20:22:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0NDk4MDdlNDc5MmQ3YjQ5M2U3OTU5Y2IwZDQyOWZ1oTW/: 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: ]] 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU5Zjc5MDllMTFkN2E1MGY0NjNlMDQ5MDgyYWRiM2YwOWIyZWRhOTUwYmI5ZGZjZTMxYjNmMDFhZTQyYmRjNB+qpgg=: 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.992 20:22:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.926 nvme0n1 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: ]] 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.926 20:22:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.927 20:22:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.927 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.927 20:22:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.927 20:22:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.927 20:22:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.927 20:22:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.927 20:22:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.927 20:22:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.927 20:22:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.927 20:22:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.927 20:22:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.927 20:22:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.927 20:22:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.927 20:22:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.927 20:22:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.858 nvme0n1 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.858 20:22:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkMGIzNjc3M2JhOTliNGFjM2RjYWVkZTcwZTZlZWWan5yt: 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: ]] 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OWMyZGNkYWVjN2RjMDMyZDZhZDU0ZGUyY2NlY2avDmD5: 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.858 20:22:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.792 nvme0n1 00:23:36.792 20:22:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.792 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.792 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.792 20:22:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.792 20:22:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.792 20:22:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.792 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBmNjFhMGMyYjgzZjFmMTJjYjNlNmMwMWYyNjEwYWY0NmU2MWU2ZjJiYWQ4ZjA2cxM50g==: 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: ]] 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjViZDM3OGVjZWNiOTk4OGI1ZmIwNzhkYzA5MTBiZWJm3kIx: 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:36.793 20:22:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.793 20:22:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.726 nvme0n1 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDMxMzA0NGNlN2U0NTdiYmU0NjE5Yzc1NjQyMjQ2ZjY3YmI3YTU5OGQ3MDQxYTBlNzYwNTU0ZDA1N2ZhNGU0MjyMtEw=: 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.726 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:37.727 20:22:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.727 20:22:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.727 20:22:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.727 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.727 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.727 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.727 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.727 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.727 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.727 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.727 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.727 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.727 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.727 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.727 20:22:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:37.727 20:22:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:37.727 20:22:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.661 nvme0n1 00:23:38.661 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.661 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.661 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.661 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.661 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.661 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0OTUzOThmNTIyYzc2NDg2MzJjMzhiMzdlNWFlOTE2NDJjYTQ2NmEwZDllMTYwcdYazQ==: 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: ]] 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM2ZTM4OTNmYjVjYzliMGZhN2ExOGFlNjUzMWRjM2M5Mzc1MTZiMTIwNTcxMWVmKqvKWQ==: 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.921 
20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:38.921 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.922 request: 00:23:38.922 { 00:23:38.922 "name": "nvme0", 00:23:38.922 "trtype": "tcp", 00:23:38.922 "traddr": "10.0.0.1", 00:23:38.922 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:38.922 "adrfam": "ipv4", 00:23:38.922 "trsvcid": "4420", 00:23:38.922 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:38.922 "method": "bdev_nvme_attach_controller", 00:23:38.922 "req_id": 1 00:23:38.922 } 00:23:38.922 Got JSON-RPC error response 00:23:38.922 response: 00:23:38.922 { 00:23:38.922 "code": -32602, 00:23:38.922 "message": "Invalid parameters" 00:23:38.922 } 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:38.922 
20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.922 request: 00:23:38.922 { 00:23:38.922 "name": "nvme0", 00:23:38.922 "trtype": "tcp", 00:23:38.922 "traddr": "10.0.0.1", 00:23:38.922 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:38.922 "adrfam": "ipv4", 00:23:38.922 "trsvcid": "4420", 00:23:38.922 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:38.922 "dhchap_key": "key2", 00:23:38.922 "method": "bdev_nvme_attach_controller", 00:23:38.922 "req_id": 1 00:23:38.922 } 00:23:38.922 Got JSON-RPC error response 00:23:38.922 response: 00:23:38.922 { 00:23:38.922 "code": -32602, 00:23:38.922 "message": "Invalid parameters" 00:23:38.922 } 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.922 request: 00:23:38.922 { 00:23:38.922 "name": "nvme0", 00:23:38.922 "trtype": "tcp", 00:23:38.922 "traddr": "10.0.0.1", 00:23:38.922 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:38.922 "adrfam": "ipv4", 00:23:38.922 "trsvcid": "4420", 00:23:38.922 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:38.922 "dhchap_key": "key1", 00:23:38.922 "dhchap_ctrlr_key": "ckey2", 00:23:38.922 "method": "bdev_nvme_attach_controller", 00:23:38.922 
"req_id": 1 00:23:38.922 } 00:23:38.922 Got JSON-RPC error response 00:23:38.922 response: 00:23:38.922 { 00:23:38.922 "code": -32602, 00:23:38.922 "message": "Invalid parameters" 00:23:38.922 } 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:38.922 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:38.922 rmmod nvme_tcp 00:23:38.922 rmmod nvme_fabrics 00:23:38.922 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:38.922 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:38.922 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:38.922 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 300575 ']' 00:23:38.922 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 300575 00:23:38.922 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 300575 ']' 00:23:38.922 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 300575 00:23:38.922 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:23:38.922 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:38.922 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 300575 00:23:38.922 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:38.922 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:38.922 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 300575' 00:23:38.922 killing process with pid 300575 00:23:38.923 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 300575 00:23:38.923 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 300575 00:23:39.181 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:39.181 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:39.181 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:39.181 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:39.181 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:39.181 20:22:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.181 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.181 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.715 20:22:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:41.715 20:22:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:41.715 20:22:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:41.715 20:22:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:41.715 20:22:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:41.715 20:22:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:41.715 20:22:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:41.715 20:22:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:41.715 20:22:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:41.715 20:22:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:41.715 20:22:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:41.715 20:22:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:41.715 20:22:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:42.650 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:42.650 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:42.650 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:42.650 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:42.650 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:42.650 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:42.650 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:42.650 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:42.650 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:42.650 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:42.650 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:42.650 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:42.650 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:42.650 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:42.650 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:42.650 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:43.857 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:23:43.857 20:22:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.CoE /tmp/spdk.key-null.P8o /tmp/spdk.key-sha256.sQ0 /tmp/spdk.key-sha384.OTw /tmp/spdk.key-sha512.xy4 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:43.857 20:22:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:44.790 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:44.790 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:44.790 0000:00:04.5 (8086 0e25): Already using the 
vfio-pci driver 00:23:44.790 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:44.790 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:44.790 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:44.790 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:44.790 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:44.790 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:44.790 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:44.790 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:44.790 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:44.790 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:44.790 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:44.790 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:44.790 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:44.790 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:44.790 00:23:44.790 real 0m50.073s 00:23:44.790 user 0m48.020s 00:23:44.790 sys 0m5.620s 00:23:44.790 20:22:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:44.790 20:22:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.790 ************************************ 00:23:44.790 END TEST nvmf_auth_host 00:23:44.790 ************************************ 00:23:44.790 20:22:31 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:23:44.790 20:22:31 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:44.790 20:22:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:44.790 20:22:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:44.790 20:22:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:45.049 ************************************ 00:23:45.049 START TEST nvmf_digest 00:23:45.049 ************************************ 00:23:45.049 20:22:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:45.049 * Looking for test storage... 
00:23:45.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:45.049 20:22:32 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:45.049 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:45.049 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.049 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.049 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.049 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.049 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.049 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.049 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:45.050 20:22:32 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:23:45.050 20:22:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:46.956 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:46.956 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:46.956 Found net devices under 0000:09:00.0: cvl_0_0 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:46.956 Found net devices under 0000:09:00.1: cvl_0_1 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.956 20:22:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.956 20:22:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.956 20:22:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.956 20:22:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:46.956 20:22:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.956 20:22:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.956 20:22:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.956 20:22:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:46.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:23:46.956 00:23:46.956 --- 10.0.0.2 ping statistics --- 00:23:46.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.956 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:23:46.956 20:22:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:46.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:23:46.956 00:23:46.956 --- 10.0.0.1 ping statistics --- 00:23:46.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.956 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:23:46.956 20:22:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.956 20:22:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:23:46.956 20:22:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:46.956 20:22:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.956 20:22:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:46.956 20:22:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:46.956 20:22:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.956 20:22:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:46.956 20:22:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:47.215 ************************************ 00:23:47.215 START TEST nvmf_digest_clean 00:23:47.215 ************************************ 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=310110 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 310110 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 310110 ']' 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.215 
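The nvmf_tcp_init block above reduces to a short sequence of iproute2/iptables commands: the target-side E810 port (cvl_0_0) is moved into its own network namespace and given 10.0.0.2, while the initiator-side port (cvl_0_1) stays in the root namespace with 10.0.0.1, and reachability is verified in both directions. A condensed sketch, assuming the same interface names and addresses as the trace:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP (port 4420) traffic in
  ping -c 1 10.0.0.2                                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> root namespace

Everything that runs on the target side afterwards is prefixed with 'ip netns exec cvl_0_0_ns_spdk' (NVMF_TARGET_NS_CMD), so only the namespaced port ever sees the NVMe/TCP listener.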
20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:47.215 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.216 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:47.216 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:47.216 [2024-05-16 20:22:34.188464] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:23:47.216 [2024-05-16 20:22:34.188555] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.216 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.216 [2024-05-16 20:22:34.253003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.484 [2024-05-16 20:22:34.363649] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.484 [2024-05-16 20:22:34.363717] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.484 [2024-05-16 20:22:34.363732] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.484 [2024-05-16 20:22:34.363758] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.484 [2024-05-16 20:22:34.363769] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
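nvmfappstart then launches the target inside that namespace with --wait-for-rpc and blocks in waitforlisten until the RPC socket answers. Roughly (the polling loop below is an illustrative stand-in for the waitforlisten helper, and paths are shortened to be relative to the SPDK tree):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # wait until the target accepts RPCs on its default UNIX socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done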
00:23:47.484 [2024-05-16 20:22:34.363798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:47.484 null0 00:23:47.484 [2024-05-16 20:22:34.535199] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.484 [2024-05-16 20:22:34.559187] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:47.484 [2024-05-16 20:22:34.559428] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=310131 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 310131 /var/tmp/bperf.sock 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 310131 ']' 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:47.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:47.484 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:47.484 [2024-05-16 20:22:34.608578] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:23:47.484 [2024-05-16 20:22:34.608652] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310131 ] 00:23:47.829 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.829 [2024-05-16 20:22:34.671290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.829 [2024-05-16 20:22:34.784249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.829 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:47.829 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:23:47.829 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:47.829 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:47.829 20:22:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:48.145 20:22:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:48.145 20:22:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:48.459 nvme0n1 00:23:48.459 20:22:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:48.459 20:22:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:48.735 Running I/O for 2 seconds... 
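Each run_bperf pass, including the one that just started, drives the same RPC sequence against the bdevperf socket; only the workload, I/O size and queue depth change. A condensed sketch of this randread 4 KiB / qd 128 pass (paths shortened; -z makes bdevperf wait for a perform_tests RPC, --wait-for-rpc holds framework init until framework_start_init is called):

  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -q 128 -t 2 -z --wait-for-rpc &

  ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # --ddgst enables the NVMe/TCP data digest (crc32c) on the initiator side
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  # afterwards: confirm crc32c was actually executed, and by which accel module
  ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

With DSA disabled (scan_dsa=false) the expected module is software, which is what the (( acc_executed > 0 )) and [[ software == software ]] checks in the trace verify after each run.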
00:23:50.635 00:23:50.635 Latency(us) 00:23:50.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.635 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:50.635 nvme0n1 : 2.00 18870.56 73.71 0.00 0.00 6772.02 3422.44 19126.80 00:23:50.635 =================================================================================================================== 00:23:50.635 Total : 18870.56 73.71 0.00 0.00 6772.02 3422.44 19126.80 00:23:50.635 0 00:23:50.635 20:22:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:50.635 20:22:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:50.635 20:22:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:50.635 20:22:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:50.635 | select(.opcode=="crc32c") 00:23:50.635 | "\(.module_name) \(.executed)"' 00:23:50.635 20:22:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:50.893 20:22:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:50.893 20:22:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:50.893 20:22:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:50.893 20:22:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:50.893 20:22:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 310131 00:23:50.893 20:22:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 310131 ']' 00:23:50.893 20:22:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 310131 00:23:50.893 20:22:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:23:50.893 20:22:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:50.893 20:22:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 310131 00:23:50.893 20:22:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:50.893 20:22:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:50.893 20:22:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 310131' 00:23:50.893 killing process with pid 310131 00:23:50.893 20:22:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 310131 00:23:50.893 Received shutdown signal, test time was about 2.000000 seconds 00:23:50.893 00:23:50.893 Latency(us) 00:23:50.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.893 =================================================================================================================== 00:23:50.893 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:50.893 20:22:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 310131 00:23:51.151 20:22:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:23:51.151 20:22:38 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:51.151 20:22:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:51.151 20:22:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:51.151 20:22:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:51.151 20:22:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:51.151 20:22:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:51.151 20:22:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=310553 00:23:51.151 20:22:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:51.151 20:22:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 310553 /var/tmp/bperf.sock 00:23:51.151 20:22:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 310553 ']' 00:23:51.151 20:22:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:51.151 20:22:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:51.151 20:22:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:51.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:51.151 20:22:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:51.151 20:22:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:51.151 [2024-05-16 20:22:38.279926] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:23:51.151 [2024-05-16 20:22:38.280001] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310553 ] 00:23:51.151 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:51.151 Zero copy mechanism will not be used. 
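The 'I/O size of 131072 is greater than zero copy threshold (65536)' notice is expected for the 128 KiB runs: as the message states, sends above 64 KiB skip the zero-copy path and fall back to copying. nvmf_digest_clean repeats the flow above for four workload shapes in total (the trailing argument is scan_dsa, false throughout):

  run_bperf randread  4096   128 false    # 4 KiB reads,    qd 128
  run_bperf randread  131072  16 false    # 128 KiB reads,  qd 16  (zero copy bypassed)
  run_bperf randwrite 4096   128 false    # 4 KiB writes,   qd 128
  run_bperf randwrite 131072  16 false    # 128 KiB writes, qd 16  (zero copy bypassed)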
00:23:51.409 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.409 [2024-05-16 20:22:38.340865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.409 [2024-05-16 20:22:38.455584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.344 20:22:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:52.344 20:22:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:23:52.344 20:22:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:52.344 20:22:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:52.344 20:22:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:52.602 20:22:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:52.602 20:22:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:52.860 nvme0n1 00:23:52.860 20:22:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:52.860 20:22:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:53.118 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:53.118 Zero copy mechanism will not be used. 00:23:53.118 Running I/O for 2 seconds... 
00:23:55.018 00:23:55.018 Latency(us) 00:23:55.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.018 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:55.018 nvme0n1 : 2.00 5647.60 705.95 0.00 0.00 2828.10 755.48 11165.39 00:23:55.018 =================================================================================================================== 00:23:55.018 Total : 5647.60 705.95 0.00 0.00 2828.10 755.48 11165.39 00:23:55.018 0 00:23:55.018 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:55.018 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:55.018 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:55.018 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:55.018 | select(.opcode=="crc32c") 00:23:55.018 | "\(.module_name) \(.executed)"' 00:23:55.018 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:55.275 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:55.275 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:55.275 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:55.276 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:55.276 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 310553 00:23:55.276 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 310553 ']' 00:23:55.276 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 310553 00:23:55.276 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:23:55.276 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:55.276 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 310553 00:23:55.276 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:55.276 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:55.276 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 310553' 00:23:55.276 killing process with pid 310553 00:23:55.276 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 310553 00:23:55.276 Received shutdown signal, test time was about 2.000000 seconds 00:23:55.276 00:23:55.276 Latency(us) 00:23:55.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.276 =================================================================================================================== 00:23:55.276 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:55.276 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 310553 00:23:55.531 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:23:55.531 20:22:42 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:55.531 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:55.531 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:55.531 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:55.531 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:55.531 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:55.531 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=311084 00:23:55.531 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:55.531 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 311084 /var/tmp/bperf.sock 00:23:55.531 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 311084 ']' 00:23:55.531 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:55.531 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:55.532 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:55.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:55.532 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:55.532 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:55.789 [2024-05-16 20:22:42.683591] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:23:55.789 [2024-05-16 20:22:42.683665] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid311084 ] 00:23:55.789 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.789 [2024-05-16 20:22:42.742495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.789 [2024-05-16 20:22:42.850421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.789 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:55.789 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:23:55.789 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:55.789 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:55.789 20:22:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:56.354 20:22:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:56.354 20:22:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:56.611 nvme0n1 00:23:56.611 20:22:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:56.611 20:22:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:56.869 Running I/O for 2 seconds... 
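Between passes, the previous bdevperf instance (pids 310131 and 310553 so far) is torn down with killprocess. The trace shows the same fixed pattern each time; a simplified reconstruction of the helper (not its exact autotest_common.sh source):

  killprocess() {
      local pid=$1
      kill -0 "$pid"                                 # is it still running?
      local name
      name=$(ps --no-headers -o comm= "$pid")
      # sudo-wrapped processes get special handling; bdevperf reports as reactor_1,
      # so a plain kill/wait is enough here
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                    # reap it and propagate its exit status
  }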
00:23:58.767 00:23:58.767 Latency(us) 00:23:58.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.767 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:58.767 nvme0n1 : 2.01 18258.94 71.32 0.00 0.00 6992.84 2888.44 9806.13 00:23:58.767 =================================================================================================================== 00:23:58.767 Total : 18258.94 71.32 0.00 0.00 6992.84 2888.44 9806.13 00:23:58.767 0 00:23:58.767 20:22:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:58.767 20:22:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:58.767 20:22:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:58.767 20:22:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:58.767 | select(.opcode=="crc32c") 00:23:58.767 | "\(.module_name) \(.executed)"' 00:23:58.767 20:22:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:59.025 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:59.025 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:59.025 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:59.025 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:59.025 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 311084 00:23:59.025 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 311084 ']' 00:23:59.025 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 311084 00:23:59.025 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:23:59.025 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:59.025 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 311084 00:23:59.025 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:59.025 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:59.025 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 311084' 00:23:59.025 killing process with pid 311084 00:23:59.025 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 311084 00:23:59.025 Received shutdown signal, test time was about 2.000000 seconds 00:23:59.025 00:23:59.025 Latency(us) 00:23:59.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.025 =================================================================================================================== 00:23:59.025 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:59.025 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 311084 00:23:59.284 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:23:59.284 20:22:46 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:59.284 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:59.284 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:59.284 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:59.284 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:59.284 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:59.284 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=311497 00:23:59.284 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:59.284 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 311497 /var/tmp/bperf.sock 00:23:59.284 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 311497 ']' 00:23:59.284 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:59.284 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:59.284 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:59.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:59.284 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:59.284 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:59.284 [2024-05-16 20:22:46.389658] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:23:59.284 [2024-05-16 20:22:46.389732] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid311497 ] 00:23:59.284 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:59.284 Zero copy mechanism will not be used. 
00:23:59.284 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.542 [2024-05-16 20:22:46.450678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.542 [2024-05-16 20:22:46.568981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.542 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:59.542 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:23:59.542 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:59.542 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:59.542 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:59.800 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:59.800 20:22:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:00.365 nvme0n1 00:24:00.365 20:22:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:00.365 20:22:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:00.365 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:00.365 Zero copy mechanism will not be used. 00:24:00.365 Running I/O for 2 seconds... 
00:24:02.263 00:24:02.263 Latency(us) 00:24:02.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.263 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:02.263 nvme0n1 : 2.00 6150.75 768.84 0.00 0.00 2593.73 2111.72 6262.33 00:24:02.263 =================================================================================================================== 00:24:02.263 Total : 6150.75 768.84 0.00 0.00 2593.73 2111.72 6262.33 00:24:02.263 0 00:24:02.521 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:02.521 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:02.521 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:02.521 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:02.521 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:02.521 | select(.opcode=="crc32c") 00:24:02.521 | "\(.module_name) \(.executed)"' 00:24:02.780 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:02.780 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:02.780 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:02.780 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:02.780 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 311497 00:24:02.780 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 311497 ']' 00:24:02.780 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 311497 00:24:02.780 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:24:02.780 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:02.780 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 311497 00:24:02.780 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:02.780 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:02.780 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 311497' 00:24:02.780 killing process with pid 311497 00:24:02.780 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 311497 00:24:02.780 Received shutdown signal, test time was about 2.000000 seconds 00:24:02.780 00:24:02.780 Latency(us) 00:24:02.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.780 =================================================================================================================== 00:24:02.780 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:02.780 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 311497 00:24:03.038 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 310110 00:24:03.038 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean 
-- common/autotest_common.sh@946 -- # '[' -z 310110 ']' 00:24:03.038 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 310110 00:24:03.038 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:24:03.038 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:03.038 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 310110 00:24:03.038 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:03.038 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:03.038 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 310110' 00:24:03.038 killing process with pid 310110 00:24:03.038 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 310110 00:24:03.038 [2024-05-16 20:22:49.998698] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:03.038 20:22:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 310110 00:24:03.297 00:24:03.297 real 0m16.134s 00:24:03.297 user 0m32.266s 00:24:03.297 sys 0m4.282s 00:24:03.297 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:03.297 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:03.297 ************************************ 00:24:03.297 END TEST nvmf_digest_clean 00:24:03.297 ************************************ 00:24:03.297 20:22:50 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:03.297 20:22:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:03.297 20:22:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:03.297 20:22:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:03.297 ************************************ 00:24:03.297 START TEST nvmf_digest_error 00:24:03.297 ************************************ 00:24:03.297 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:24:03.297 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:03.297 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:03.297 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:03.297 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:03.297 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=312052 00:24:03.297 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:03.297 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 312052 00:24:03.297 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 312052 ']' 00:24:03.297 20:22:50 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.297 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:03.297 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.297 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:03.297 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:03.297 [2024-05-16 20:22:50.375776] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:24:03.297 [2024-05-16 20:22:50.375898] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.297 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.297 [2024-05-16 20:22:50.441729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.555 [2024-05-16 20:22:50.547145] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.555 [2024-05-16 20:22:50.547200] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.555 [2024-05-16 20:22:50.547228] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.555 [2024-05-16 20:22:50.547239] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.555 [2024-05-16 20:22:50.547248] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
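For the nvmf_digest_error test starting here, the setup differs from the clean runs in one place: while the freshly started target (launched with --wait-for-rpc) has not finished its framework init, crc32c is reassigned to the error-injecting accel module, and corruption is armed just before the workload runs. Condensed from the RPC calls in the lines that follow (paths shortened; calls without -s go to the target's default /var/tmp/spdk.sock):

  ./scripts/rpc.py accel_assign_opc -o crc32c -m error                        # target: crc32c -> error module
  # ... the null0 bdev and the 10.0.0.2:4420 NVMe/TCP listener are then set up as in the clean test ...
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1                                 # initiator-side bdev_nvme options
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable              # target: injection off during setup
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256       # target: arm the corruption
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests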
00:24:03.555 [2024-05-16 20:22:50.547275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.555 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:03.555 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:24:03.555 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:03.555 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.555 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:03.555 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.555 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:03.555 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.555 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:03.555 [2024-05-16 20:22:50.607804] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:03.555 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.555 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:03.555 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:03.555 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.555 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:03.812 null0 00:24:03.812 [2024-05-16 20:22:50.721137] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.812 [2024-05-16 20:22:50.745121] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:03.812 [2024-05-16 20:22:50.745395] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.812 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.812 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:03.812 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:03.812 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:03.812 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:03.812 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:03.812 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=312078 00:24:03.812 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:03.812 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 312078 /var/tmp/bperf.sock 00:24:03.812 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 312078 ']' 00:24:03.812 
20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:03.812 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:03.812 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:03.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:03.812 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:03.812 20:22:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:03.812 [2024-05-16 20:22:50.790354] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:24:03.812 [2024-05-16 20:22:50.790432] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid312078 ] 00:24:03.812 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.812 [2024-05-16 20:22:50.851732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.069 [2024-05-16 20:22:50.968508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.069 20:22:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:04.069 20:22:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:24:04.069 20:22:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:04.069 20:22:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:04.326 20:22:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:04.326 20:22:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.326 20:22:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:04.327 20:22:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.327 20:22:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:04.327 20:22:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:04.892 nvme0n1 00:24:04.892 20:22:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:04.892 20:22:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.892 20:22:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:04.892 20:22:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.892 20:22:51 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:04.892 20:22:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:04.892 Running I/O for 2 seconds... 00:24:04.892 [2024-05-16 20:22:51.923583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:04.892 [2024-05-16 20:22:51.923637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.892 [2024-05-16 20:22:51.923661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.892 [2024-05-16 20:22:51.940488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:04.892 [2024-05-16 20:22:51.940525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.892 [2024-05-16 20:22:51.940545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.892 [2024-05-16 20:22:51.954713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:04.892 [2024-05-16 20:22:51.954749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.892 [2024-05-16 20:22:51.954769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.892 [2024-05-16 20:22:51.969048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:04.892 [2024-05-16 20:22:51.969078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.892 [2024-05-16 20:22:51.969110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.892 [2024-05-16 20:22:51.981196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:04.892 [2024-05-16 20:22:51.981230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.892 [2024-05-16 20:22:51.981249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.892 [2024-05-16 20:22:51.994633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:04.892 [2024-05-16 20:22:51.994667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.892 [2024-05-16 20:22:51.994686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.892 [2024-05-16 20:22:52.007585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x943400) 00:24:04.892 [2024-05-16 20:22:52.007618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.892 [2024-05-16 20:22:52.007637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.892 [2024-05-16 20:22:52.021760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:04.892 [2024-05-16 20:22:52.021794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.892 [2024-05-16 20:22:52.021813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.892 [2024-05-16 20:22:52.034591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:04.892 [2024-05-16 20:22:52.034625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.892 [2024-05-16 20:22:52.034644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.150 [2024-05-16 20:22:52.049085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.150 [2024-05-16 20:22:52.049114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.150 [2024-05-16 20:22:52.049145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.150 [2024-05-16 20:22:52.062585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.150 [2024-05-16 20:22:52.062618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.150 [2024-05-16 20:22:52.062637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.150 [2024-05-16 20:22:52.075736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.150 [2024-05-16 20:22:52.075768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.150 [2024-05-16 20:22:52.075787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.150 [2024-05-16 20:22:52.089661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.150 [2024-05-16 20:22:52.089695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.150 [2024-05-16 20:22:52.089713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.150 [2024-05-16 20:22:52.103366] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.150 [2024-05-16 20:22:52.103399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.150 [2024-05-16 20:22:52.103424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.150 [2024-05-16 20:22:52.116736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.150 [2024-05-16 20:22:52.116770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.150 [2024-05-16 20:22:52.116788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.150 [2024-05-16 20:22:52.130113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.150 [2024-05-16 20:22:52.130142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.150 [2024-05-16 20:22:52.130175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.150 [2024-05-16 20:22:52.144283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.150 [2024-05-16 20:22:52.144317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.150 [2024-05-16 20:22:52.144336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.150 [2024-05-16 20:22:52.157475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.150 [2024-05-16 20:22:52.157508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.150 [2024-05-16 20:22:52.157527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.150 [2024-05-16 20:22:52.170918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.150 [2024-05-16 20:22:52.170946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.151 [2024-05-16 20:22:52.170977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.151 [2024-05-16 20:22:52.184206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.151 [2024-05-16 20:22:52.184240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.151 [2024-05-16 20:22:52.184258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:05.151 [2024-05-16 20:22:52.197531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.151 [2024-05-16 20:22:52.197565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.151 [2024-05-16 20:22:52.197584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.151 [2024-05-16 20:22:52.212347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.151 [2024-05-16 20:22:52.212384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.151 [2024-05-16 20:22:52.212404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.151 [2024-05-16 20:22:52.226256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.151 [2024-05-16 20:22:52.226296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.151 [2024-05-16 20:22:52.226316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.151 [2024-05-16 20:22:52.238830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.151 [2024-05-16 20:22:52.238874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.151 [2024-05-16 20:22:52.238918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.151 [2024-05-16 20:22:52.252192] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.151 [2024-05-16 20:22:52.252241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.151 [2024-05-16 20:22:52.252259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.151 [2024-05-16 20:22:52.269029] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.151 [2024-05-16 20:22:52.269059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.151 [2024-05-16 20:22:52.269076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.151 [2024-05-16 20:22:52.280869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.151 [2024-05-16 20:22:52.280902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.151 [2024-05-16 20:22:52.280935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.409 [2024-05-16 20:22:52.302630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.409 [2024-05-16 20:22:52.302664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.409 [2024-05-16 20:22:52.302682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.409 [2024-05-16 20:22:52.314729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.409 [2024-05-16 20:22:52.314762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.409 [2024-05-16 20:22:52.314780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.409 [2024-05-16 20:22:52.328557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.409 [2024-05-16 20:22:52.328591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.409 [2024-05-16 20:22:52.328610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.409 [2024-05-16 20:22:52.342944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.409 [2024-05-16 20:22:52.342986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.409 [2024-05-16 20:22:52.343003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.409 [2024-05-16 20:22:52.356266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.409 [2024-05-16 20:22:52.356300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.409 [2024-05-16 20:22:52.356319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.409 [2024-05-16 20:22:52.370124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.409 [2024-05-16 20:22:52.370172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.409 [2024-05-16 20:22:52.370191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.409 [2024-05-16 20:22:52.384877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.409 [2024-05-16 20:22:52.384921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.409 [2024-05-16 20:22:52.384937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.409 [2024-05-16 20:22:52.399723] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.409 [2024-05-16 20:22:52.399757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.409 [2024-05-16 20:22:52.399776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.409 [2024-05-16 20:22:52.412063] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.409 [2024-05-16 20:22:52.412091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.409 [2024-05-16 20:22:52.412122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.409 [2024-05-16 20:22:52.426380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.409 [2024-05-16 20:22:52.426414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.409 [2024-05-16 20:22:52.426432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.409 [2024-05-16 20:22:52.439070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.409 [2024-05-16 20:22:52.439096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.409 [2024-05-16 20:22:52.439127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.409 [2024-05-16 20:22:52.452773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.409 [2024-05-16 20:22:52.452806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.409 [2024-05-16 20:22:52.452824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.409 [2024-05-16 20:22:52.465997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.409 [2024-05-16 20:22:52.466026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.409 [2024-05-16 20:22:52.466063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.409 [2024-05-16 20:22:52.481953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.409 [2024-05-16 20:22:52.481983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.409 [2024-05-16 20:22:52.482014] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.409 [2024-05-16 20:22:52.497259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.410 [2024-05-16 20:22:52.497294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.410 [2024-05-16 20:22:52.497313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.410 [2024-05-16 20:22:52.509226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.410 [2024-05-16 20:22:52.509260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.410 [2024-05-16 20:22:52.509278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.410 [2024-05-16 20:22:52.521966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.410 [2024-05-16 20:22:52.521994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.410 [2024-05-16 20:22:52.522024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.410 [2024-05-16 20:22:52.536713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.410 [2024-05-16 20:22:52.536747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.410 [2024-05-16 20:22:52.536765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.410 [2024-05-16 20:22:52.552337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.410 [2024-05-16 20:22:52.552367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.410 [2024-05-16 20:22:52.552383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.668 [2024-05-16 20:22:52.565002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.668 [2024-05-16 20:22:52.565044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.668 [2024-05-16 20:22:52.565060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.668 [2024-05-16 20:22:52.577836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.668 [2024-05-16 20:22:52.577876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:05.668 [2024-05-16 20:22:52.577896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.668 [2024-05-16 20:22:52.591461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.668 [2024-05-16 20:22:52.591500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.668 [2024-05-16 20:22:52.591520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.668 [2024-05-16 20:22:52.605082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.668 [2024-05-16 20:22:52.605109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.668 [2024-05-16 20:22:52.605139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.668 [2024-05-16 20:22:52.618385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.668 [2024-05-16 20:22:52.618419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.668 [2024-05-16 20:22:52.618438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.668 [2024-05-16 20:22:52.631729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.668 [2024-05-16 20:22:52.631761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.668 [2024-05-16 20:22:52.631780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.668 [2024-05-16 20:22:52.645487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.668 [2024-05-16 20:22:52.645521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.668 [2024-05-16 20:22:52.645539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.668 [2024-05-16 20:22:52.658707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.668 [2024-05-16 20:22:52.658740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.668 [2024-05-16 20:22:52.658759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.668 [2024-05-16 20:22:52.674533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.668 [2024-05-16 20:22:52.674565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20389 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.668 [2024-05-16 20:22:52.674584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.668 [2024-05-16 20:22:52.691995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.668 [2024-05-16 20:22:52.692038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.669 [2024-05-16 20:22:52.692055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.669 [2024-05-16 20:22:52.703793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.669 [2024-05-16 20:22:52.703826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.669 [2024-05-16 20:22:52.703844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.669 [2024-05-16 20:22:52.718230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.669 [2024-05-16 20:22:52.718263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.669 [2024-05-16 20:22:52.718282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.669 [2024-05-16 20:22:52.736046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.669 [2024-05-16 20:22:52.736075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.669 [2024-05-16 20:22:52.736107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.669 [2024-05-16 20:22:52.750709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.669 [2024-05-16 20:22:52.750741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.669 [2024-05-16 20:22:52.750760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.669 [2024-05-16 20:22:52.762746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.669 [2024-05-16 20:22:52.762779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.669 [2024-05-16 20:22:52.762798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.669 [2024-05-16 20:22:52.779941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.669 [2024-05-16 20:22:52.779969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.669 [2024-05-16 20:22:52.779999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.669 [2024-05-16 20:22:52.796006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.669 [2024-05-16 20:22:52.796034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.669 [2024-05-16 20:22:52.796063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.669 [2024-05-16 20:22:52.811494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.669 [2024-05-16 20:22:52.811540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.669 [2024-05-16 20:22:52.811559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.927 [2024-05-16 20:22:52.825584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.927 [2024-05-16 20:22:52.825617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.927 [2024-05-16 20:22:52.825636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.927 [2024-05-16 20:22:52.837493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.927 [2024-05-16 20:22:52.837532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.927 [2024-05-16 20:22:52.837551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.927 [2024-05-16 20:22:52.851393] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.927 [2024-05-16 20:22:52.851426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.927 [2024-05-16 20:22:52.851444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.927 [2024-05-16 20:22:52.864167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.927 [2024-05-16 20:22:52.864211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.927 [2024-05-16 20:22:52.864229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.927 [2024-05-16 20:22:52.877737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 
00:24:05.927 [2024-05-16 20:22:52.877770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.927 [2024-05-16 20:22:52.877789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.927 [2024-05-16 20:22:52.891711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.927 [2024-05-16 20:22:52.891743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.927 [2024-05-16 20:22:52.891762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.927 [2024-05-16 20:22:52.906576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.927 [2024-05-16 20:22:52.906609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.927 [2024-05-16 20:22:52.906628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.927 [2024-05-16 20:22:52.920089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.927 [2024-05-16 20:22:52.920116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.927 [2024-05-16 20:22:52.920147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.927 [2024-05-16 20:22:52.933836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.927 [2024-05-16 20:22:52.933876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.927 [2024-05-16 20:22:52.933910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.927 [2024-05-16 20:22:52.948377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.927 [2024-05-16 20:22:52.948411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.927 [2024-05-16 20:22:52.948430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.927 [2024-05-16 20:22:52.960748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.928 [2024-05-16 20:22:52.960781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.928 [2024-05-16 20:22:52.960800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.928 [2024-05-16 20:22:52.976533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x943400) 00:24:05.928 [2024-05-16 20:22:52.976567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.928 [2024-05-16 20:22:52.976585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.928 [2024-05-16 20:22:52.991480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.928 [2024-05-16 20:22:52.991520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.928 [2024-05-16 20:22:52.991540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.928 [2024-05-16 20:22:53.006167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.928 [2024-05-16 20:22:53.006215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.928 [2024-05-16 20:22:53.006234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.928 [2024-05-16 20:22:53.018096] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.928 [2024-05-16 20:22:53.018126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.928 [2024-05-16 20:22:53.018142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.928 [2024-05-16 20:22:53.032969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.928 [2024-05-16 20:22:53.032997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.928 [2024-05-16 20:22:53.033028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.928 [2024-05-16 20:22:53.050609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.928 [2024-05-16 20:22:53.050643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.928 [2024-05-16 20:22:53.050662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.928 [2024-05-16 20:22:53.068310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:05.928 [2024-05-16 20:22:53.068346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.928 [2024-05-16 20:22:53.068366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.186 [2024-05-16 20:22:53.079595] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.186 [2024-05-16 20:22:53.079630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.186 [2024-05-16 20:22:53.079656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.186 [2024-05-16 20:22:53.096301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.186 [2024-05-16 20:22:53.096335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:76 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.186 [2024-05-16 20:22:53.096354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.186 [2024-05-16 20:22:53.111970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.186 [2024-05-16 20:22:53.111999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.186 [2024-05-16 20:22:53.112032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.186 [2024-05-16 20:22:53.123839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.186 [2024-05-16 20:22:53.123880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.186 [2024-05-16 20:22:53.123913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.186 [2024-05-16 20:22:53.139481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.186 [2024-05-16 20:22:53.139515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.186 [2024-05-16 20:22:53.139533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.186 [2024-05-16 20:22:53.155742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.186 [2024-05-16 20:22:53.155775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.186 [2024-05-16 20:22:53.155793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.186 [2024-05-16 20:22:53.168148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.186 [2024-05-16 20:22:53.168193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.186 [2024-05-16 20:22:53.168212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:06.186 [2024-05-16 20:22:53.182048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.186 [2024-05-16 20:22:53.182076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.186 [2024-05-16 20:22:53.182107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.186 [2024-05-16 20:22:53.193903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.186 [2024-05-16 20:22:53.193931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.186 [2024-05-16 20:22:53.193961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.186 [2024-05-16 20:22:53.208882] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.186 [2024-05-16 20:22:53.208928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.186 [2024-05-16 20:22:53.208944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.186 [2024-05-16 20:22:53.222705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.186 [2024-05-16 20:22:53.222738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.186 [2024-05-16 20:22:53.222756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.186 [2024-05-16 20:22:53.235751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.186 [2024-05-16 20:22:53.235783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.186 [2024-05-16 20:22:53.235801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.186 [2024-05-16 20:22:53.248616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.186 [2024-05-16 20:22:53.248650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.186 [2024-05-16 20:22:53.248668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.186 [2024-05-16 20:22:53.262838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.186 [2024-05-16 20:22:53.262882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.186 [2024-05-16 20:22:53.262902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.186 [2024-05-16 20:22:53.275094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.186 [2024-05-16 20:22:53.275125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.186 [2024-05-16 20:22:53.275156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.186 [2024-05-16 20:22:53.289598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.186 [2024-05-16 20:22:53.289631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.186 [2024-05-16 20:22:53.289650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.186 [2024-05-16 20:22:53.303668] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.186 [2024-05-16 20:22:53.303702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.186 [2024-05-16 20:22:53.303720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.186 [2024-05-16 20:22:53.317335] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.186 [2024-05-16 20:22:53.317369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.186 [2024-05-16 20:22:53.317389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.444 [2024-05-16 20:22:53.331018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.444 [2024-05-16 20:22:53.331047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.444 [2024-05-16 20:22:53.331080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.444 [2024-05-16 20:22:53.347553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.444 [2024-05-16 20:22:53.347588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.444 [2024-05-16 20:22:53.347606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.444 [2024-05-16 20:22:53.363748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.444 [2024-05-16 20:22:53.363781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.444 [2024-05-16 20:22:53.363800] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.444 [2024-05-16 20:22:53.376728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.444 [2024-05-16 20:22:53.376762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.444 [2024-05-16 20:22:53.376781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.444 [2024-05-16 20:22:53.390097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.444 [2024-05-16 20:22:53.390127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.444 [2024-05-16 20:22:53.390161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.444 [2024-05-16 20:22:53.403447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.444 [2024-05-16 20:22:53.403481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.444 [2024-05-16 20:22:53.403500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.444 [2024-05-16 20:22:53.416760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.444 [2024-05-16 20:22:53.416794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.444 [2024-05-16 20:22:53.416812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.444 [2024-05-16 20:22:53.430062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.444 [2024-05-16 20:22:53.430090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.444 [2024-05-16 20:22:53.430121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.444 [2024-05-16 20:22:53.443296] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.445 [2024-05-16 20:22:53.443329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.445 [2024-05-16 20:22:53.443354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.445 [2024-05-16 20:22:53.456572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.445 [2024-05-16 20:22:53.456605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.445 [2024-05-16 20:22:53.456623] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.445 [2024-05-16 20:22:53.470017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.445 [2024-05-16 20:22:53.470046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.445 [2024-05-16 20:22:53.470077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.445 [2024-05-16 20:22:53.485244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.445 [2024-05-16 20:22:53.485279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.445 [2024-05-16 20:22:53.485297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.445 [2024-05-16 20:22:53.497061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.445 [2024-05-16 20:22:53.497088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.445 [2024-05-16 20:22:53.497118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.445 [2024-05-16 20:22:53.511738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.445 [2024-05-16 20:22:53.511771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.445 [2024-05-16 20:22:53.511790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.445 [2024-05-16 20:22:53.526102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.445 [2024-05-16 20:22:53.526148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.445 [2024-05-16 20:22:53.526168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.445 [2024-05-16 20:22:53.539291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.445 [2024-05-16 20:22:53.539324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.445 [2024-05-16 20:22:53.539343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.445 [2024-05-16 20:22:53.552547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.445 [2024-05-16 20:22:53.552580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:06.445 [2024-05-16 20:22:53.552599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.445 [2024-05-16 20:22:53.565734] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.445 [2024-05-16 20:22:53.565773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.445 [2024-05-16 20:22:53.565792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.445 [2024-05-16 20:22:53.579579] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.445 [2024-05-16 20:22:53.579612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.445 [2024-05-16 20:22:53.579631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.701 [2024-05-16 20:22:53.593389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.701 [2024-05-16 20:22:53.593422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.701 [2024-05-16 20:22:53.593441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.701 [2024-05-16 20:22:53.604709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.701 [2024-05-16 20:22:53.604742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.701 [2024-05-16 20:22:53.604760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.701 [2024-05-16 20:22:53.619445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.701 [2024-05-16 20:22:53.619478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:36 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.701 [2024-05-16 20:22:53.619497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.701 [2024-05-16 20:22:53.634605] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.701 [2024-05-16 20:22:53.634639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.701 [2024-05-16 20:22:53.634658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.701 [2024-05-16 20:22:53.647618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.701 [2024-05-16 20:22:53.647651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7790 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.701 [2024-05-16 20:22:53.647669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.701 [2024-05-16 20:22:53.663806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.701 [2024-05-16 20:22:53.663839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.701 [2024-05-16 20:22:53.663865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.701 [2024-05-16 20:22:53.676745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.701 [2024-05-16 20:22:53.676777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.701 [2024-05-16 20:22:53.676795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.701 [2024-05-16 20:22:53.688580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.701 [2024-05-16 20:22:53.688613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.701 [2024-05-16 20:22:53.688632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.701 [2024-05-16 20:22:53.702557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.701 [2024-05-16 20:22:53.702591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.701 [2024-05-16 20:22:53.702610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.701 [2024-05-16 20:22:53.716461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.701 [2024-05-16 20:22:53.716494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.701 [2024-05-16 20:22:53.716513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.701 [2024-05-16 20:22:53.730451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.701 [2024-05-16 20:22:53.730484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.701 [2024-05-16 20:22:53.730502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.701 [2024-05-16 20:22:53.741737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.701 [2024-05-16 20:22:53.741765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:87 nsid:1 lba:2057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.701 [2024-05-16 20:22:53.741795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.701 [2024-05-16 20:22:53.757368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.701 [2024-05-16 20:22:53.757400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.701 [2024-05-16 20:22:53.757419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.701 [2024-05-16 20:22:53.771121] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.701 [2024-05-16 20:22:53.771149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.701 [2024-05-16 20:22:53.771182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.701 [2024-05-16 20:22:53.785297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.701 [2024-05-16 20:22:53.785329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.701 [2024-05-16 20:22:53.785347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.701 [2024-05-16 20:22:53.798515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.701 [2024-05-16 20:22:53.798548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.701 [2024-05-16 20:22:53.798574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.701 [2024-05-16 20:22:53.813183] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.701 [2024-05-16 20:22:53.813229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.701 [2024-05-16 20:22:53.813247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.701 [2024-05-16 20:22:53.825070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.701 [2024-05-16 20:22:53.825097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.701 [2024-05-16 20:22:53.825127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.701 [2024-05-16 20:22:53.841235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.701 [2024-05-16 20:22:53.841268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.701 [2024-05-16 20:22:53.841287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.959 [2024-05-16 20:22:53.855190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.959 [2024-05-16 20:22:53.855223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.959 [2024-05-16 20:22:53.855242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.959 [2024-05-16 20:22:53.868706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.959 [2024-05-16 20:22:53.868738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.959 [2024-05-16 20:22:53.868756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.959 [2024-05-16 20:22:53.882415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.959 [2024-05-16 20:22:53.882449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.959 [2024-05-16 20:22:53.882467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.959 [2024-05-16 20:22:53.897918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x943400) 00:24:06.959 [2024-05-16 20:22:53.897946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.959 [2024-05-16 20:22:53.897961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.959 00:24:06.959 Latency(us) 00:24:06.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.959 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:06.959 nvme0n1 : 2.00 18185.01 71.04 0.00 0.00 7030.46 3592.34 20777.34 00:24:06.959 =================================================================================================================== 00:24:06.959 Total : 18185.01 71.04 0.00 0.00 7030.46 3592.34 20777.34 00:24:06.959 0 00:24:06.959 20:22:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:06.959 20:22:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:06.959 20:22:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:06.959 20:22:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:06.959 | .driver_specific 00:24:06.959 | .nvme_error 00:24:06.959 | .status_code 00:24:06.959 | .command_transient_transport_error' 
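The trace above is the pass/fail check for this digest-error case: host/digest.sh pulls bdevperf's I/O statistics over /var/tmp/bperf.sock, and the jq filter just shown extracts the transient transport error counter, which the next traced line requires to be greater than zero (142 in this run) before the bperf process is torn down. A minimal bash re-sketch of that check follows; the helper name, strict-mode header, and variable names are this sketch's own rather than copied from digest.sh.

```bash
#!/usr/bin/env bash
# Sketch of the check traced above: read bdevperf's I/O statistics over its
# RPC socket and require a non-zero transient-transport-error count.
set -euo pipefail

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat reports the per-status-code NVMe error counters exposed
    # when bdev_nvme_set_options is run with --nvme-error-stat, as in the
    # setup traced below; the jq path is the one captured in the trace.
    "$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

# Fail the case unless at least one transient transport error was observed
# (the run above counted 142 of them).
(( $(get_transient_errcount nvme0n1) > 0 ))
```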
00:24:07.217 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 )) 00:24:07.217 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 312078 00:24:07.217 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 312078 ']' 00:24:07.217 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 312078 00:24:07.217 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:24:07.217 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:07.217 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 312078 00:24:07.217 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:07.217 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:07.217 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 312078' 00:24:07.217 killing process with pid 312078 00:24:07.217 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 312078 00:24:07.217 Received shutdown signal, test time was about 2.000000 seconds 00:24:07.217 00:24:07.217 Latency(us) 00:24:07.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.217 =================================================================================================================== 00:24:07.217 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:07.217 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 312078 00:24:07.475 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:07.475 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:07.475 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:07.475 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:07.475 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:07.475 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=312538 00:24:07.475 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:07.475 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 312538 /var/tmp/bperf.sock 00:24:07.475 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 312538 ']' 00:24:07.475 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:07.475 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:07.475 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:07.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
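The relaunch just traced starts a fresh bdevperf in idle mode (-z) for the 128 KiB randread, queue-depth-16 case, then waits for its RPC socket before configuring it. A rough sketch of that launch-and-wait step, assuming the same binary path and socket as the trace; waitforlisten is SPDK's own shell helper, and the polling loop below is a stand-in for it, not its actual implementation.

```bash
#!/usr/bin/env bash
# Sketch of relaunching bdevperf for the next error-injection case.
set -euo pipefail

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_sock=/var/tmp/bperf.sock

# -z makes bdevperf start idle and wait for a perform_tests RPC instead of
# running immediately; the remaining flags are the ones from the trace.
"$spdk"/build/examples/bdevperf -m 2 -r "$bperf_sock" \
    -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Poll until the RPC socket answers before configuring the run.
until "$spdk"/scripts/rpc.py -s "$bperf_sock" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$bperfpid"   # bail out if bdevperf died during startup
    sleep 0.5
done
```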
00:24:07.475 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:07.475 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:07.475 [2024-05-16 20:22:54.496416] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:24:07.475 [2024-05-16 20:22:54.496501] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid312538 ] 00:24:07.475 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:07.475 Zero copy mechanism will not be used. 00:24:07.475 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.475 [2024-05-16 20:22:54.561760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.732 [2024-05-16 20:22:54.679321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.732 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:07.732 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:24:07.732 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:07.732 20:22:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:07.989 20:22:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:07.989 20:22:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.989 20:22:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:07.989 20:22:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.989 20:22:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:07.989 20:22:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:08.555 nvme0n1 00:24:08.555 20:22:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:08.555 20:22:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.555 20:22:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:08.555 20:22:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.555 20:22:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:08.555 20:22:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:08.555 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:08.555 Zero copy mechanism will not be used. 
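Before the two-second run that follows, the trace shows how this bdevperf instance is prepared for digest-error injection: per-status-code NVMe error counting and unlimited bdev retries are switched on, the TCP controller is attached with data digest enabled (--ddgst), CRC32C corruption is armed through accel_error_inject_error, and perform_tests is triggered over the socket. The sketch below strings those calls together using only the flags visible in the trace; which application rpc_cmd resolves to is not shown in the log, so sending the injection RPCs to the default rpc.py socket is an assumption, and the variable names are this sketch's own.

```bash
#!/usr/bin/env bash
# Sketch of the setup traced above for the randread/131072/qd16 error case.
# Assumption: the accel error-injection RPCs go to the default rpc.py socket;
# everything sent to /var/tmp/bperf.sock mirrors the bperf_rpc calls captured
# in the trace.
set -euo pipefail

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_py=$spdk/scripts/rpc.py
bperf_py=$spdk/examples/bdev/bdevperf/bdevperf.py
bperf_sock=/var/tmp/bperf.sock

# Count NVMe errors per status code and retry failed I/O indefinitely, so
# injected digest errors are tallied instead of failing the bdevperf job.
"$rpc_py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Injection starts disabled while the controller is attached...
"$rpc_py" accel_error_inject_error -o crc32c -t disable

# ...data digest is enabled on the TCP connection with --ddgst...
"$rpc_py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ...then CRC32C corruption is armed with the parameters from the trace,
# which is what produces the "data digest error" completions logged below.
"$rpc_py" accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the timed run on the already-launched (idle) bdevperf instance.
"$bperf_py" -s "$bperf_sock" perform_tests
```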
00:24:08.555 Running I/O for 2 seconds... 00:24:08.555 [2024-05-16 20:22:55.578365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.555 [2024-05-16 20:22:55.578417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-05-16 20:22:55.578441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.555 [2024-05-16 20:22:55.584597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.555 [2024-05-16 20:22:55.584634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-05-16 20:22:55.584655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.555 [2024-05-16 20:22:55.590902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.555 [2024-05-16 20:22:55.590940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-05-16 20:22:55.590958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.555 [2024-05-16 20:22:55.596650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.555 [2024-05-16 20:22:55.596686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-05-16 20:22:55.596706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.555 [2024-05-16 20:22:55.602136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.555 [2024-05-16 20:22:55.602194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-05-16 20:22:55.602211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.555 [2024-05-16 20:22:55.608819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.555 [2024-05-16 20:22:55.608864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-05-16 20:22:55.608899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.555 [2024-05-16 20:22:55.614441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.555 [2024-05-16 20:22:55.614477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-05-16 20:22:55.614496] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.555 [2024-05-16 20:22:55.620033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.555 [2024-05-16 20:22:55.620065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-05-16 20:22:55.620083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.555 [2024-05-16 20:22:55.625839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.555 [2024-05-16 20:22:55.625883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-05-16 20:22:55.625928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.555 [2024-05-16 20:22:55.631412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.555 [2024-05-16 20:22:55.631448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-05-16 20:22:55.631467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.555 [2024-05-16 20:22:55.637923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.555 [2024-05-16 20:22:55.637969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-05-16 20:22:55.637990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.555 [2024-05-16 20:22:55.643827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.555 [2024-05-16 20:22:55.643876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-05-16 20:22:55.643901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.555 [2024-05-16 20:22:55.649652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.555 [2024-05-16 20:22:55.649687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-05-16 20:22:55.649706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.555 [2024-05-16 20:22:55.654812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.556 [2024-05-16 20:22:55.654847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-05-16 
20:22:55.654876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.556 [2024-05-16 20:22:55.660544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.556 [2024-05-16 20:22:55.660578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-05-16 20:22:55.660597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.556 [2024-05-16 20:22:55.667756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.556 [2024-05-16 20:22:55.667790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-05-16 20:22:55.667809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.556 [2024-05-16 20:22:55.676115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.556 [2024-05-16 20:22:55.676160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-05-16 20:22:55.676178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.556 [2024-05-16 20:22:55.683923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.556 [2024-05-16 20:22:55.683969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-05-16 20:22:55.683986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.556 [2024-05-16 20:22:55.692165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.556 [2024-05-16 20:22:55.692214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-05-16 20:22:55.692233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.815 [2024-05-16 20:22:55.700408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.815 [2024-05-16 20:22:55.700449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.815 [2024-05-16 20:22:55.700469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.815 [2024-05-16 20:22:55.708631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.815 [2024-05-16 20:22:55.708667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:08.815 [2024-05-16 20:22:55.708686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.815 [2024-05-16 20:22:55.714693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.815 [2024-05-16 20:22:55.714727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.815 [2024-05-16 20:22:55.714746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.815 [2024-05-16 20:22:55.719475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.815 [2024-05-16 20:22:55.719509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.815 [2024-05-16 20:22:55.719528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.815 [2024-05-16 20:22:55.724273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.815 [2024-05-16 20:22:55.724307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.815 [2024-05-16 20:22:55.724325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.815 [2024-05-16 20:22:55.727712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.815 [2024-05-16 20:22:55.727745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.815 [2024-05-16 20:22:55.727763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.815 [2024-05-16 20:22:55.731600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.815 [2024-05-16 20:22:55.731633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.815 [2024-05-16 20:22:55.731652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.815 [2024-05-16 20:22:55.735829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.815 [2024-05-16 20:22:55.735872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.815 [2024-05-16 20:22:55.735908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.815 [2024-05-16 20:22:55.739114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.815 [2024-05-16 20:22:55.739143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.815 [2024-05-16 20:22:55.739160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.815 [2024-05-16 20:22:55.743196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.815 [2024-05-16 20:22:55.743240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.815 [2024-05-16 20:22:55.743257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.815 [2024-05-16 20:22:55.746966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.815 [2024-05-16 20:22:55.746996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.815 [2024-05-16 20:22:55.747012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.815 [2024-05-16 20:22:55.750119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.815 [2024-05-16 20:22:55.750148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.815 [2024-05-16 20:22:55.750181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.815 [2024-05-16 20:22:55.755041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.815 [2024-05-16 20:22:55.755073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.815 [2024-05-16 20:22:55.755090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.815 [2024-05-16 20:22:55.761004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.815 [2024-05-16 20:22:55.761036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.815 [2024-05-16 20:22:55.761053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.816 [2024-05-16 20:22:55.767168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.816 [2024-05-16 20:22:55.767199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.816 [2024-05-16 20:22:55.767215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.816 [2024-05-16 20:22:55.771836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.816 [2024-05-16 20:22:55.771880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.816 [2024-05-16 20:22:55.771925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.816 [2024-05-16 20:22:55.779128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.816 [2024-05-16 20:22:55.779160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.816 [2024-05-16 20:22:55.779192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.816 [2024-05-16 20:22:55.786295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.816 [2024-05-16 20:22:55.786329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.816 [2024-05-16 20:22:55.786357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.816 [2024-05-16 20:22:55.794812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.816 [2024-05-16 20:22:55.794846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.816 [2024-05-16 20:22:55.794875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.816 [2024-05-16 20:22:55.801012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.816 [2024-05-16 20:22:55.801043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.816 [2024-05-16 20:22:55.801060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.816 [2024-05-16 20:22:55.807393] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.816 [2024-05-16 20:22:55.807428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.816 [2024-05-16 20:22:55.807447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.816 [2024-05-16 20:22:55.812989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.816 [2024-05-16 20:22:55.813020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.816 [2024-05-16 20:22:55.813052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.816 [2024-05-16 20:22:55.819161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.816 
[2024-05-16 20:22:55.819207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.816 [2024-05-16 20:22:55.819226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.816 [2024-05-16 20:22:55.825181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.816 [2024-05-16 20:22:55.825211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.816 [2024-05-16 20:22:55.825229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.816 [2024-05-16 20:22:55.831065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.816 [2024-05-16 20:22:55.831095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.816 [2024-05-16 20:22:55.831112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.816 [2024-05-16 20:22:55.837038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.816 [2024-05-16 20:22:55.837069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.816 [2024-05-16 20:22:55.837086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.816 [2024-05-16 20:22:55.843069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.816 [2024-05-16 20:22:55.843100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.816 [2024-05-16 20:22:55.843117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.816 [2024-05-16 20:22:55.849061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.816 [2024-05-16 20:22:55.849093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.816 [2024-05-16 20:22:55.849110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.816 [2024-05-16 20:22:55.855159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.816 [2024-05-16 20:22:55.855190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.816 [2024-05-16 20:22:55.855224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.816 [2024-05-16 20:22:55.861056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1820190) 00:24:08.816 [2024-05-16 20:22:55.861101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.816 [2024-05-16 20:22:55.861117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.816 [2024-05-16 20:22:55.866713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.816 [2024-05-16 20:22:55.866747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.816 [2024-05-16 20:22:55.866765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.816 [2024-05-16 20:22:55.873214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.816 [2024-05-16 20:22:55.873249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.816 [2024-05-16 20:22:55.873268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.816 [2024-05-16 20:22:55.878549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.816 [2024-05-16 20:22:55.878584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.816 [2024-05-16 20:22:55.878603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.817 [2024-05-16 20:22:55.883562] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.817 [2024-05-16 20:22:55.883595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.817 [2024-05-16 20:22:55.883614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.817 [2024-05-16 20:22:55.888582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.817 [2024-05-16 20:22:55.888617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.817 [2024-05-16 20:22:55.888641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.817 [2024-05-16 20:22:55.893657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.817 [2024-05-16 20:22:55.893692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.817 [2024-05-16 20:22:55.893711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.817 [2024-05-16 20:22:55.898655] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.817 [2024-05-16 20:22:55.898689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.817 [2024-05-16 20:22:55.898708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.817 [2024-05-16 20:22:55.903459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.817 [2024-05-16 20:22:55.903494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.817 [2024-05-16 20:22:55.903513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.817 [2024-05-16 20:22:55.906956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.817 [2024-05-16 20:22:55.906986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.817 [2024-05-16 20:22:55.907002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.817 [2024-05-16 20:22:55.910757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.817 [2024-05-16 20:22:55.910792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.817 [2024-05-16 20:22:55.910813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.817 [2024-05-16 20:22:55.914753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.817 [2024-05-16 20:22:55.914786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.817 [2024-05-16 20:22:55.914804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.817 [2024-05-16 20:22:55.917972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.817 [2024-05-16 20:22:55.918001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.817 [2024-05-16 20:22:55.918018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.817 [2024-05-16 20:22:55.921826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.817 [2024-05-16 20:22:55.921865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.817 [2024-05-16 20:22:55.921886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:24:08.817 [2024-05-16 20:22:55.925951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.817 [2024-05-16 20:22:55.925989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.817 [2024-05-16 20:22:55.926007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.817 [2024-05-16 20:22:55.929439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.817 [2024-05-16 20:22:55.929472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.817 [2024-05-16 20:22:55.929490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.817 [2024-05-16 20:22:55.932782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.817 [2024-05-16 20:22:55.932815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.817 [2024-05-16 20:22:55.932833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.817 [2024-05-16 20:22:55.937208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.817 [2024-05-16 20:22:55.937242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.817 [2024-05-16 20:22:55.937260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.817 [2024-05-16 20:22:55.941986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.817 [2024-05-16 20:22:55.942017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.817 [2024-05-16 20:22:55.942034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.817 [2024-05-16 20:22:55.946107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.817 [2024-05-16 20:22:55.946140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.817 [2024-05-16 20:22:55.946176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.817 [2024-05-16 20:22:55.952791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.817 [2024-05-16 20:22:55.952822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.817 [2024-05-16 20:22:55.952840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.817 [2024-05-16 20:22:55.958347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:08.817 [2024-05-16 20:22:55.958378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.817 [2024-05-16 20:22:55.958396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:55.964154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:55.964185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:55.964202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:55.970135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:55.970179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:55.970197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:55.975758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:55.975789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:55.975806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:55.981748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:55.981778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:55.981811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:55.987269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:55.987300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:55.987317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:55.992323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:55.992354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:55.992372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:55.997693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:55.997724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:55.997741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:56.003200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:56.003231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:56.003263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:56.009067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:56.009098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:56.009115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:56.014253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:56.014284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:56.014308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:56.019380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:56.019411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:56.019428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:56.025977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:56.026008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:56.026024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:56.031932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:56.031963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:56.031980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:56.037243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:56.037274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:56.037291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:56.043246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:56.043277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:56.043294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:56.049212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:56.049243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:56.049276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:56.054719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:56.054750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:56.054781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:56.059557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:56.059588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:56.059605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:56.064899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:56.064934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:56.064967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:56.071215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:56.071246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 
[2024-05-16 20:22:56.071263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:56.076036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:56.076067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:56.076084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:56.080590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:56.080621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:56.080651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.077 [2024-05-16 20:22:56.086484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.077 [2024-05-16 20:22:56.086514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.077 [2024-05-16 20:22:56.086531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.091599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.091630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.091647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.096877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.096908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.096925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.102612] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.102643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.102659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.109863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.109894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.109911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.117205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.117237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.117254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.125074] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.125105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.125123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.133235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.133267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.133299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.140859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.140891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.140908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.146517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.146550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.146582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.152064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.152095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.152112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.157968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.158000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.158017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.163628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.163659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.163678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.169817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.169849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.169883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.175731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.175762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.175779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.182160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.182191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.182208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.187741] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.187772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.187789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.193728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.193759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.193776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.199447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.199478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.199495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.204625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.204656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.204673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.209676] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.209707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.209724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.212717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.212746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.212763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.078 [2024-05-16 20:22:56.217087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.078 [2024-05-16 20:22:56.217117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.078 [2024-05-16 20:22:56.217135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.337 [2024-05-16 20:22:56.222268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.337 [2024-05-16 20:22:56.222299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.337 [2024-05-16 20:22:56.222316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.337 [2024-05-16 20:22:56.226747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.337 [2024-05-16 20:22:56.226791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.337 [2024-05-16 20:22:56.226808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.231158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 
[2024-05-16 20:22:56.231188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.231205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.235555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.235584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.235616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.240024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.240053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.240069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.244488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.244517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.244534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.249784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.249812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.249844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.254009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.254039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.254062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.258846] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.258898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.258915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.263568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.263598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.263614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.268130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.268175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.268191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.272728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.272772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.272789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.277355] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.277384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.277400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.283520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.283568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.283586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.288494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.288525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.288542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.293416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.293446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.293478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.298025] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.298060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.298078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.302991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.303027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.303044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.308075] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.308106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.308123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.312781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.312811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.312828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.317401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.317430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.317462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.322700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.322729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.322746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.326789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.326819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.326835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.332317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.332347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.332364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.337341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.337371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.337389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.342816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.342848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.342874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.347665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.347711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.347728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.354865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.354896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.354913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.361770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.361802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.338 [2024-05-16 20:22:56.361819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.338 [2024-05-16 20:22:56.368990] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.338 [2024-05-16 20:22:56.369023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.339 [2024-05-16 20:22:56.369040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.339 [2024-05-16 20:22:56.377103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.339 [2024-05-16 20:22:56.377135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.339 [2024-05-16 20:22:56.377153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.339 [2024-05-16 20:22:56.383991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.339 [2024-05-16 20:22:56.384024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.339 [2024-05-16 20:22:56.384041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.339 [2024-05-16 20:22:56.391843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.339 [2024-05-16 20:22:56.391884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.339 [2024-05-16 20:22:56.391902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.339 [2024-05-16 20:22:56.400135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.339 [2024-05-16 20:22:56.400167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.339 [2024-05-16 20:22:56.400191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.339 [2024-05-16 20:22:56.408030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.339 [2024-05-16 20:22:56.408062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.339 [2024-05-16 20:22:56.408089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.339 [2024-05-16 20:22:56.415994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.339 [2024-05-16 20:22:56.416026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.339 [2024-05-16 20:22:56.416043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.339 [2024-05-16 20:22:56.423757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.339 [2024-05-16 20:22:56.423790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.339 [2024-05-16 20:22:56.423807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.339 [2024-05-16 20:22:56.430781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.339 [2024-05-16 20:22:56.430814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.339 [2024-05-16 20:22:56.430831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.339 [2024-05-16 20:22:56.436903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.339 [2024-05-16 20:22:56.436934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.339 [2024-05-16 20:22:56.436951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.339 [2024-05-16 20:22:56.442655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.339 [2024-05-16 20:22:56.442687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.339 [2024-05-16 20:22:56.442704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.339 [2024-05-16 20:22:56.448147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.339 [2024-05-16 20:22:56.448179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.339 [2024-05-16 20:22:56.448197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.339 [2024-05-16 20:22:56.452123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.339 [2024-05-16 20:22:56.452154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.339 [2024-05-16 20:22:56.452172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.339 [2024-05-16 20:22:56.456675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.339 [2024-05-16 20:22:56.456706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.339 [2024-05-16 20:22:56.456724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.339 [2024-05-16 20:22:56.462345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.339 [2024-05-16 20:22:56.462375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:09.339 [2024-05-16 20:22:56.462409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.339 [2024-05-16 20:22:56.468276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.339 [2024-05-16 20:22:56.468323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.339 [2024-05-16 20:22:56.468347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.339 [2024-05-16 20:22:56.473643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.339 [2024-05-16 20:22:56.473689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.339 [2024-05-16 20:22:56.473706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.339 [2024-05-16 20:22:56.479395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.339 [2024-05-16 20:22:56.479427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.339 [2024-05-16 20:22:56.479445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.598 [2024-05-16 20:22:56.484835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.598 [2024-05-16 20:22:56.484904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.598 [2024-05-16 20:22:56.484924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.598 [2024-05-16 20:22:56.490788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.598 [2024-05-16 20:22:56.490819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.598 [2024-05-16 20:22:56.490851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.598 [2024-05-16 20:22:56.496863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.598 [2024-05-16 20:22:56.496894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.598 [2024-05-16 20:22:56.496911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.598 [2024-05-16 20:22:56.502746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.598 [2024-05-16 20:22:56.502777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9856 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.598 [2024-05-16 20:22:56.502804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.598 [2024-05-16 20:22:56.508907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.598 [2024-05-16 20:22:56.508955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.598 [2024-05-16 20:22:56.508972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.598 [2024-05-16 20:22:56.515438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.598 [2024-05-16 20:22:56.515469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.598 [2024-05-16 20:22:56.515486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.598 [2024-05-16 20:22:56.523099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.598 [2024-05-16 20:22:56.523132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.598 [2024-05-16 20:22:56.523149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.598 [2024-05-16 20:22:56.530677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.598 [2024-05-16 20:22:56.530709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.598 [2024-05-16 20:22:56.530726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.598 [2024-05-16 20:22:56.536730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.536759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.536776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.543182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.543213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.543244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.549188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.549220] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.549237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.553930] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.553960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.553978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.559011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.559048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.559066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.564572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.564620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.564637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.570074] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.570105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.570123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.575927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.575958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.575975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.581868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.581899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.581916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.587765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.587797] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.587814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.593080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.593111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.593129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.598543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.598575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.598592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.604511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.604542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.604559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.610353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.610385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.610402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.616150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.616182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.616200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.622070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.622101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.622118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.627545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 
00:24:09.599 [2024-05-16 20:22:56.627577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.627594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.632887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.632919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.632936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.638256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.638287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.638304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.643717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.643764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.643781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.649467] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.649498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.649515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.654781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.654828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.654851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.659228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.659259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.659276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.664211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.664243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.664260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.669102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.669136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.669153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.674235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.674266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.674284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.679741] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.679773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.679790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.685490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.599 [2024-05-16 20:22:56.685522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.599 [2024-05-16 20:22:56.685539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.599 [2024-05-16 20:22:56.691129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.600 [2024-05-16 20:22:56.691161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.600 [2024-05-16 20:22:56.691178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.600 [2024-05-16 20:22:56.697929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.600 [2024-05-16 20:22:56.697961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.600 [2024-05-16 20:22:56.697977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.600 [2024-05-16 20:22:56.703390] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.600 [2024-05-16 20:22:56.703427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.600 [2024-05-16 20:22:56.703445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.600 [2024-05-16 20:22:56.709132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.600 [2024-05-16 20:22:56.709163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.600 [2024-05-16 20:22:56.709180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.600 [2024-05-16 20:22:56.715195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.600 [2024-05-16 20:22:56.715227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.600 [2024-05-16 20:22:56.715245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.600 [2024-05-16 20:22:56.721016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.600 [2024-05-16 20:22:56.721047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.600 [2024-05-16 20:22:56.721064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.600 [2024-05-16 20:22:56.727017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.600 [2024-05-16 20:22:56.727049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.600 [2024-05-16 20:22:56.727066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.600 [2024-05-16 20:22:56.732742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.600 [2024-05-16 20:22:56.732775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.600 [2024-05-16 20:22:56.732792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.600 [2024-05-16 20:22:56.738391] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.600 [2024-05-16 20:22:56.738423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.600 [2024-05-16 20:22:56.738440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:24:09.859 [2024-05-16 20:22:56.744461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.859 [2024-05-16 20:22:56.744494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.859 [2024-05-16 20:22:56.744511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.859 [2024-05-16 20:22:56.749937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.859 [2024-05-16 20:22:56.749969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.859 [2024-05-16 20:22:56.749986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.859 [2024-05-16 20:22:56.755224] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.859 [2024-05-16 20:22:56.755256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.859 [2024-05-16 20:22:56.755273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.859 [2024-05-16 20:22:56.758248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.859 [2024-05-16 20:22:56.758278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.859 [2024-05-16 20:22:56.758296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.859 [2024-05-16 20:22:56.763459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.859 [2024-05-16 20:22:56.763491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.859 [2024-05-16 20:22:56.763508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.768934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.768966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.768983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.774436] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.774468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.774485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.779711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.779757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.779774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.784773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.784804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.784821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.790104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.790136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.790153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.796130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.796162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.796186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.802869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.802901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.802928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.809432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.809463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.809480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.815222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.815254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.815271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.818193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.818223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.818241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.822655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.822686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.822703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.828098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.828144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.828160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.833736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.833768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.833785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.838534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.838565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.838582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.843474] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.843511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.843528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.849110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.849141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 
[2024-05-16 20:22:56.849159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.855210] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.855240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.855257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.860640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.860686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.860703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.867234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.867265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.867283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.873447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.873478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.873495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.880206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.880237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.880254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.884719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.884751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.884772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.889891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.889922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.889945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.896068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.896098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.896116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.902485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.902517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.902538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.908453] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.908488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.908507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.915139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.915171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.915203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.860 [2024-05-16 20:22:56.920909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.860 [2024-05-16 20:22:56.920940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-05-16 20:22:56.920957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.861 [2024-05-16 20:22:56.927074] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.861 [2024-05-16 20:22:56.927103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.861 [2024-05-16 20:22:56.927119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.861 [2024-05-16 20:22:56.932868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.861 [2024-05-16 20:22:56.932902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.861 [2024-05-16 20:22:56.932934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.861 [2024-05-16 20:22:56.936738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.861 [2024-05-16 20:22:56.936773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.861 [2024-05-16 20:22:56.936792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.861 [2024-05-16 20:22:56.940640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.861 [2024-05-16 20:22:56.940679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.861 [2024-05-16 20:22:56.940699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.861 [2024-05-16 20:22:56.946136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.861 [2024-05-16 20:22:56.946184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.861 [2024-05-16 20:22:56.946204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.861 [2024-05-16 20:22:56.952125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.861 [2024-05-16 20:22:56.952174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.861 [2024-05-16 20:22:56.952191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.861 [2024-05-16 20:22:56.958285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.861 [2024-05-16 20:22:56.958320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.861 [2024-05-16 20:22:56.958339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.861 [2024-05-16 20:22:56.963869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.861 [2024-05-16 20:22:56.963917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.861 [2024-05-16 20:22:56.963934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.861 [2024-05-16 20:22:56.969567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.861 [2024-05-16 20:22:56.969602] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.861 [2024-05-16 20:22:56.969620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.861 [2024-05-16 20:22:56.975673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.861 [2024-05-16 20:22:56.975708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.861 [2024-05-16 20:22:56.975727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.861 [2024-05-16 20:22:56.981445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.861 [2024-05-16 20:22:56.981480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.861 [2024-05-16 20:22:56.981500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.861 [2024-05-16 20:22:56.987461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.861 [2024-05-16 20:22:56.987496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.861 [2024-05-16 20:22:56.987515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.861 [2024-05-16 20:22:56.993528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.861 [2024-05-16 20:22:56.993562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.861 [2024-05-16 20:22:56.993581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.861 [2024-05-16 20:22:56.999596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:09.861 [2024-05-16 20:22:56.999627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.861 [2024-05-16 20:22:56.999644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.004961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.004992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.005010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.010520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 
00:24:10.121 [2024-05-16 20:22:57.010554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.010573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.016730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.016765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.016783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.022842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.022887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.022921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.030450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.030483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.030503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.038517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.038552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.038571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.047226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.047260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.047286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.055219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.055254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.055273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.063038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.063069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.063086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.070985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.071017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.071034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.078726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.078760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.078779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.086385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.086420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.086439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.094066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.094097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.094115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.101824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.101868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.101904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.109561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.109597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.109617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.117269] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.117309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.117329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.124944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.124976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.124993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.132580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.132611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.132646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.140447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.140481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.140500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.148269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.148303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.148322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.154361] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.154395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.154413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.159428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.159461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.159479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:24:10.121 [2024-05-16 20:22:57.164965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.164997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.165014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.170924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.170955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.170971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.176874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.176909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.176942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.182628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.121 [2024-05-16 20:22:57.182662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.121 [2024-05-16 20:22:57.182681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.121 [2024-05-16 20:22:57.188243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.122 [2024-05-16 20:22:57.188277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.122 [2024-05-16 20:22:57.188296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.122 [2024-05-16 20:22:57.191976] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.122 [2024-05-16 20:22:57.192008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.122 [2024-05-16 20:22:57.192026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.122 [2024-05-16 20:22:57.195689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.122 [2024-05-16 20:22:57.195722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.122 [2024-05-16 20:22:57.195740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.122 [2024-05-16 20:22:57.199222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.122 [2024-05-16 20:22:57.199259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.122 [2024-05-16 20:22:57.199279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.122 [2024-05-16 20:22:57.202228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.122 [2024-05-16 20:22:57.202260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.122 [2024-05-16 20:22:57.202279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.122 [2024-05-16 20:22:57.205789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.122 [2024-05-16 20:22:57.205822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.122 [2024-05-16 20:22:57.205846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.122 [2024-05-16 20:22:57.210486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.122 [2024-05-16 20:22:57.210519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.122 [2024-05-16 20:22:57.210544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.122 [2024-05-16 20:22:57.215355] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.122 [2024-05-16 20:22:57.215388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.122 [2024-05-16 20:22:57.215407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.122 [2024-05-16 20:22:57.220197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.122 [2024-05-16 20:22:57.220244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.122 [2024-05-16 20:22:57.220262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.122 [2024-05-16 20:22:57.225033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.122 [2024-05-16 20:22:57.225063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.122 [2024-05-16 20:22:57.225080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.122 [2024-05-16 20:22:57.229840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.122 [2024-05-16 20:22:57.229881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.122 [2024-05-16 20:22:57.229915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.122 [2024-05-16 20:22:57.234792] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.122 [2024-05-16 20:22:57.234824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.122 [2024-05-16 20:22:57.234842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.122 [2024-05-16 20:22:57.239696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.122 [2024-05-16 20:22:57.239730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.122 [2024-05-16 20:22:57.239749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.122 [2024-05-16 20:22:57.244664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.122 [2024-05-16 20:22:57.244697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.122 [2024-05-16 20:22:57.244716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.122 [2024-05-16 20:22:57.249618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.122 [2024-05-16 20:22:57.249650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.122 [2024-05-16 20:22:57.249668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.122 [2024-05-16 20:22:57.254693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.122 [2024-05-16 20:22:57.254726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.122 [2024-05-16 20:22:57.254745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.122 [2024-05-16 20:22:57.260232] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.122 [2024-05-16 20:22:57.260266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:10.122 [2024-05-16 20:22:57.260285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.382 [2024-05-16 20:22:57.266236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.382 [2024-05-16 20:22:57.266270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.382 [2024-05-16 20:22:57.266289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.382 [2024-05-16 20:22:57.272574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.382 [2024-05-16 20:22:57.272608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.382 [2024-05-16 20:22:57.272627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.382 [2024-05-16 20:22:57.278845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.382 [2024-05-16 20:22:57.278899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.382 [2024-05-16 20:22:57.278916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.382 [2024-05-16 20:22:57.285211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.382 [2024-05-16 20:22:57.285246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.382 [2024-05-16 20:22:57.285265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.382 [2024-05-16 20:22:57.291271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.382 [2024-05-16 20:22:57.291305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.382 [2024-05-16 20:22:57.291324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.382 [2024-05-16 20:22:57.297477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.382 [2024-05-16 20:22:57.297512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.382 [2024-05-16 20:22:57.297531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.382 [2024-05-16 20:22:57.303284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.382 [2024-05-16 20:22:57.303319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.382 [2024-05-16 20:22:57.303344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.309502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.309538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.309557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.315661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.315695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.315714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.321675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.321709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.321728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.327475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.327510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.327529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.333424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.333458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.333476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.339702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.339737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.339756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.345640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.345674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.345693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.351730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.351764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.351783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.357927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.357963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.357981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.365284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.365319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.365338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.373475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.373510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.373529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.381035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.381067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.381084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.385519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.385557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.385578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.389815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 
00:24:10.383 [2024-05-16 20:22:57.389851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.389880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.395912] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.395943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.395961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.403375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.403410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.403429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.410217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.410251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.410270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.416123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.416167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.416187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.421831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.421874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.421907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.427900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.427929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.427959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.433576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.433610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.433628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.438573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.438607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.438626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.443640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.443673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.443692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.448559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.448592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.448610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.453232] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.453265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.453284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.458991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.383 [2024-05-16 20:22:57.459020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.383 [2024-05-16 20:22:57.459042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.383 [2024-05-16 20:22:57.466234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.384 [2024-05-16 20:22:57.466268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.384 [2024-05-16 20:22:57.466287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.384 [2024-05-16 20:22:57.473398] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.384 [2024-05-16 20:22:57.473432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.384 [2024-05-16 20:22:57.473451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.384 [2024-05-16 20:22:57.479399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.384 [2024-05-16 20:22:57.479433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.384 [2024-05-16 20:22:57.479451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.384 [2024-05-16 20:22:57.485481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.384 [2024-05-16 20:22:57.485516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.384 [2024-05-16 20:22:57.485534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.384 [2024-05-16 20:22:57.491444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.384 [2024-05-16 20:22:57.491477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.384 [2024-05-16 20:22:57.491496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.384 [2024-05-16 20:22:57.497454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.384 [2024-05-16 20:22:57.497485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.384 [2024-05-16 20:22:57.497502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.384 [2024-05-16 20:22:57.502484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.384 [2024-05-16 20:22:57.502517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.384 [2024-05-16 20:22:57.502536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.384 [2024-05-16 20:22:57.507375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.384 [2024-05-16 20:22:57.507408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.384 [2024-05-16 20:22:57.507426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:24:10.384 [2024-05-16 20:22:57.512264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.384 [2024-05-16 20:22:57.512304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.384 [2024-05-16 20:22:57.512323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.384 [2024-05-16 20:22:57.517809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.384 [2024-05-16 20:22:57.517839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.384 [2024-05-16 20:22:57.517861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.384 [2024-05-16 20:22:57.524564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.384 [2024-05-16 20:22:57.524600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.384 [2024-05-16 20:22:57.524619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.642 [2024-05-16 20:22:57.532345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.642 [2024-05-16 20:22:57.532381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.642 [2024-05-16 20:22:57.532401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.642 [2024-05-16 20:22:57.538393] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.642 [2024-05-16 20:22:57.538427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.642 [2024-05-16 20:22:57.538446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.642 [2024-05-16 20:22:57.544497] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.642 [2024-05-16 20:22:57.544531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.642 [2024-05-16 20:22:57.544550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.642 [2024-05-16 20:22:57.550377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.642 [2024-05-16 20:22:57.550412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.642 [2024-05-16 20:22:57.550432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.643 [2024-05-16 20:22:57.556567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.643 [2024-05-16 20:22:57.556601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.643 [2024-05-16 20:22:57.556620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.643 [2024-05-16 20:22:57.563066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.643 [2024-05-16 20:22:57.563096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.643 [2024-05-16 20:22:57.563113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.643 [2024-05-16 20:22:57.569923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.643 [2024-05-16 20:22:57.569954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.643 [2024-05-16 20:22:57.569970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.643 [2024-05-16 20:22:57.576239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1820190) 00:24:10.643 [2024-05-16 20:22:57.576275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.643 [2024-05-16 20:22:57.576302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.643 00:24:10.643 Latency(us) 00:24:10.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.643 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:10.643 nvme0n1 : 2.00 5409.22 676.15 0.00 0.00 2952.96 719.08 9077.95 00:24:10.643 =================================================================================================================== 00:24:10.643 Total : 5409.22 676.15 0.00 0.00 2952.96 719.08 9077.95 00:24:10.643 0 00:24:10.643 20:22:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:10.643 20:22:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:10.643 20:22:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:10.643 20:22:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:10.643 | .driver_specific 00:24:10.643 | .nvme_error 00:24:10.643 | .status_code 00:24:10.643 | .command_transient_transport_error' 00:24:10.901 20:22:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 349 > 0 )) 00:24:10.901 20:22:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 312538 00:24:10.901 20:22:57 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 312538 ']' 00:24:10.901 20:22:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 312538 00:24:10.901 20:22:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:24:10.901 20:22:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:10.901 20:22:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 312538 00:24:10.901 20:22:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:10.901 20:22:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:10.901 20:22:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 312538' 00:24:10.901 killing process with pid 312538 00:24:10.901 20:22:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 312538 00:24:10.901 Received shutdown signal, test time was about 2.000000 seconds 00:24:10.901 00:24:10.901 Latency(us) 00:24:10.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.901 =================================================================================================================== 00:24:10.901 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:10.901 20:22:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 312538 00:24:11.158 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:24:11.158 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:11.158 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:11.158 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:11.158 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:11.158 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=313008 00:24:11.158 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:24:11.158 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 313008 /var/tmp/bperf.sock 00:24:11.158 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 313008 ']' 00:24:11.158 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:11.158 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:11.158 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:11.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
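The pass/fail check traced a little further up (host/digest.sh get_transient_errcount) pulls the per-bdev NVMe error counters out of bdevperf over RPC and requires the transient-transport-error count to be non-zero. A minimal standalone sketch of that check is below; the rpc.py path, the /var/tmp/bperf.sock socket, the bdev name and the jq filter are all taken from the trace above, and treating any non-zero count as success mirrors the (( count > 0 )) test there.

  #!/usr/bin/env bash
  # Sketch only: read the "transient transport error" counter that bdevperf
  # accumulates when --nvme-error-stat is enabled, via bdev_get_iostat.
  set -euo pipefail

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path from the trace
  SOCK=/var/tmp/bperf.sock                                               # bdevperf RPC socket from the trace
  BDEV=${1:-nvme0n1}

  count=$("$RPC" -s "$SOCK" bdev_get_iostat -b "$BDEV" |
          jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

  echo "transient transport errors on $BDEV: $count"
  # The test above treats a non-zero count as proof that the injected digest
  # corruption surfaced as COMMAND TRANSIENT TRANSPORT ERROR completions.
  (( count > 0 ))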
00:24:11.158 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:11.158 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:11.158 [2024-05-16 20:22:58.160183] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:24:11.158 [2024-05-16 20:22:58.160262] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid313008 ] 00:24:11.158 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.158 [2024-05-16 20:22:58.217922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.416 [2024-05-16 20:22:58.326941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.416 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:11.416 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:24:11.416 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:11.416 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:11.674 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:11.674 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.674 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:11.674 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.674 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:11.674 20:22:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:11.932 nvme0n1 00:24:11.932 20:22:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:11.933 20:22:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.933 20:22:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:11.933 20:22:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.933 20:22:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:11.933 20:22:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:12.191 Running I/O for 2 seconds... 
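Condensed from the xtrace above, the randwrite error pass comes down to a handful of RPCs: enable NVMe error statistics with unlimited bdev retries, clear any armed CRC32C error injection, attach the NVMe-oF/TCP target with data digest enabled, arm the accel layer to corrupt 256 CRC32C operations, and start the I/O. The sketch below only strings together the commands already shown in the trace; the one assumption is that the plain rpc_cmd calls in the trace go to the target application's default RPC socket, while bperf_rpc goes to the bdevperf socket.

  #!/usr/bin/env bash
  # Sketch of the randwrite digest-error pass as traced above. Two RPC endpoints
  # are involved: the bdevperf app on /var/tmp/bperf.sock (bperf_rpc in the trace)
  # and the nvmf target app on its default socket (rpc_cmd in the trace; assumed).
  set -euo pipefail

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk          # tree path from the trace
  BPERF_RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"         # initiator side (bdevperf)
  TGT_RPC="$SPDK/scripts/rpc.py"                                  # target side, default socket (assumption)
  BPERF_PY="$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock"

  # Keep per-bdev NVMe error statistics and retry failed I/O indefinitely, so
  # digest errors are counted as transient transport errors instead of failing the job.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # No error injection while the controller attaches ...
  $TGT_RPC accel_error_inject_error -o crc32c -t disable

  # ... then attach the NVMe-oF/TCP subsystem with data digest (--ddgst) enabled.
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt 256 CRC32C operations so the receiver sees mismatching data digests,
  # then run the job bdevperf was launched with (-w randwrite -o 4096 -q 128 -t 2).
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256
  $BPERF_PY perform_tests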
00:24:12.191 [2024-05-16 20:22:59.195844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.191 [2024-05-16 20:22:59.196236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.191 [2024-05-16 20:22:59.196280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.191 [2024-05-16 20:22:59.210098] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.191 [2024-05-16 20:22:59.210370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.191 [2024-05-16 20:22:59.210404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.191 [2024-05-16 20:22:59.224380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.191 [2024-05-16 20:22:59.224644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.191 [2024-05-16 20:22:59.224676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.191 [2024-05-16 20:22:59.238633] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.191 [2024-05-16 20:22:59.238904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.191 [2024-05-16 20:22:59.238932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.191 [2024-05-16 20:22:59.252873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.191 [2024-05-16 20:22:59.253129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.191 [2024-05-16 20:22:59.253174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.191 [2024-05-16 20:22:59.267049] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.191 [2024-05-16 20:22:59.267315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.191 [2024-05-16 20:22:59.267346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.191 [2024-05-16 20:22:59.281177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.191 [2024-05-16 20:22:59.281450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.191 [2024-05-16 20:22:59.281481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:12.191 [2024-05-16 20:22:59.295348] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.191 [2024-05-16 20:22:59.295607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.191 [2024-05-16 20:22:59.295638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.191 [2024-05-16 20:22:59.309489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.191 [2024-05-16 20:22:59.309752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.191 [2024-05-16 20:22:59.309783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.191 [2024-05-16 20:22:59.323767] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.191 [2024-05-16 20:22:59.324059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.191 [2024-05-16 20:22:59.324087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.450 [2024-05-16 20:22:59.338274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.450 [2024-05-16 20:22:59.338538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.450 [2024-05-16 20:22:59.338569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.450 [2024-05-16 20:22:59.352480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.450 [2024-05-16 20:22:59.352741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.450 [2024-05-16 20:22:59.352772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.450 [2024-05-16 20:22:59.366658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.450 [2024-05-16 20:22:59.366948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.450 [2024-05-16 20:22:59.366976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.450 [2024-05-16 20:22:59.380860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.450 [2024-05-16 20:22:59.381133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.450 [2024-05-16 20:22:59.381161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:12.450 [2024-05-16 20:22:59.394947] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.450 [2024-05-16 20:22:59.395259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.450 [2024-05-16 20:22:59.395289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.450 [2024-05-16 20:22:59.409271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.450 [2024-05-16 20:22:59.409528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.450 [2024-05-16 20:22:59.409559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.450 [2024-05-16 20:22:59.423335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.450 [2024-05-16 20:22:59.423595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.450 [2024-05-16 20:22:59.423634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.450 [2024-05-16 20:22:59.437470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.450 [2024-05-16 20:22:59.437730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.450 [2024-05-16 20:22:59.437760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.450 [2024-05-16 20:22:59.451588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.450 [2024-05-16 20:22:59.451865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.450 [2024-05-16 20:22:59.451896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.450 [2024-05-16 20:22:59.465668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.450 [2024-05-16 20:22:59.466000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.450 [2024-05-16 20:22:59.466028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.450 [2024-05-16 20:22:59.479740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.450 [2024-05-16 20:22:59.480083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.450 [2024-05-16 20:22:59.480111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:12.450 [2024-05-16 20:22:59.493812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.450 [2024-05-16 20:22:59.494111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.450 [2024-05-16 20:22:59.494156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.450 [2024-05-16 20:22:59.508007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.450 [2024-05-16 20:22:59.508275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.451 [2024-05-16 20:22:59.508305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.451 [2024-05-16 20:22:59.522088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.451 [2024-05-16 20:22:59.522363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.451 [2024-05-16 20:22:59.522393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.451 [2024-05-16 20:22:59.536086] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.451 [2024-05-16 20:22:59.536356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.451 [2024-05-16 20:22:59.536386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.451 [2024-05-16 20:22:59.550049] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.451 [2024-05-16 20:22:59.550324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.451 [2024-05-16 20:22:59.550354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.451 [2024-05-16 20:22:59.564260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.451 [2024-05-16 20:22:59.564520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.451 [2024-05-16 20:22:59.564551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.451 [2024-05-16 20:22:59.578425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.451 [2024-05-16 20:22:59.578685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.451 [2024-05-16 20:22:59.578716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.451 [2024-05-16 20:22:59.592713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.451 [2024-05-16 20:22:59.593020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.451 [2024-05-16 20:22:59.593048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.709 [2024-05-16 20:22:59.607141] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.709 [2024-05-16 20:22:59.607405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.709 [2024-05-16 20:22:59.607435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.709 [2024-05-16 20:22:59.621182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.710 [2024-05-16 20:22:59.621452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.710 [2024-05-16 20:22:59.621482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.710 [2024-05-16 20:22:59.635322] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.710 [2024-05-16 20:22:59.635582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.710 [2024-05-16 20:22:59.635612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.710 [2024-05-16 20:22:59.649463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.710 [2024-05-16 20:22:59.649739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.710 [2024-05-16 20:22:59.649769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.710 [2024-05-16 20:22:59.663599] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.710 [2024-05-16 20:22:59.663865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.710 [2024-05-16 20:22:59.663909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.710 [2024-05-16 20:22:59.677685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.710 [2024-05-16 20:22:59.678011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.710 [2024-05-16 20:22:59.678039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.710 [2024-05-16 20:22:59.691804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.710 [2024-05-16 20:22:59.692092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.710 [2024-05-16 20:22:59.692119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.710 [2024-05-16 20:22:59.705938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.710 [2024-05-16 20:22:59.706229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.710 [2024-05-16 20:22:59.706260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.710 [2024-05-16 20:22:59.719967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.710 [2024-05-16 20:22:59.720236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.710 [2024-05-16 20:22:59.720266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.710 [2024-05-16 20:22:59.734061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.710 [2024-05-16 20:22:59.734351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.710 [2024-05-16 20:22:59.734381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.710 [2024-05-16 20:22:59.748313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.710 [2024-05-16 20:22:59.748572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.710 [2024-05-16 20:22:59.748602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.710 [2024-05-16 20:22:59.762431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.710 [2024-05-16 20:22:59.762690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.710 [2024-05-16 20:22:59.762720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.710 [2024-05-16 20:22:59.776462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.710 [2024-05-16 20:22:59.776694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.710 [2024-05-16 20:22:59.776725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.710 [2024-05-16 20:22:59.790586] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.710 [2024-05-16 20:22:59.790818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.710 [2024-05-16 20:22:59.790861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.710 [2024-05-16 20:22:59.804712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.710 [2024-05-16 20:22:59.804973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.710 [2024-05-16 20:22:59.805001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.710 [2024-05-16 20:22:59.818788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.710 [2024-05-16 20:22:59.819036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.710 [2024-05-16 20:22:59.819063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.710 [2024-05-16 20:22:59.832020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.710 [2024-05-16 20:22:59.832250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.710 [2024-05-16 20:22:59.832276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.710 [2024-05-16 20:22:59.845423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.710 [2024-05-16 20:22:59.845636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.710 [2024-05-16 20:22:59.845678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.969 [2024-05-16 20:22:59.859172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.969 [2024-05-16 20:22:59.859402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.969 [2024-05-16 20:22:59.859429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.969 [2024-05-16 20:22:59.872778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.969 [2024-05-16 20:22:59.873019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.969 [2024-05-16 20:22:59.873048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.969 [2024-05-16 20:22:59.886216] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.969 [2024-05-16 20:22:59.886445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.969 [2024-05-16 20:22:59.886474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.969 [2024-05-16 20:22:59.899759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.969 [2024-05-16 20:22:59.900005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.969 [2024-05-16 20:22:59.900048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.969 [2024-05-16 20:22:59.913408] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.969 [2024-05-16 20:22:59.913628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.969 [2024-05-16 20:22:59.913656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.969 [2024-05-16 20:22:59.926825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.969 [2024-05-16 20:22:59.927045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.969 [2024-05-16 20:22:59.927073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.969 [2024-05-16 20:22:59.940299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.969 [2024-05-16 20:22:59.940528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.969 [2024-05-16 20:22:59.940555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.969 [2024-05-16 20:22:59.953941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.969 [2024-05-16 20:22:59.954172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.969 [2024-05-16 20:22:59.954201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.969 [2024-05-16 20:22:59.967614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640 00:24:12.969 [2024-05-16 20:22:59.967830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.969 [2024-05-16 20:22:59.967871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:12.969 [2024-05-16 20:22:59.980651] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640
00:24:12.969 [2024-05-16 20:22:59.980872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:12.969 [2024-05-16 20:22:59.980899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same data_crc32_calc_done / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplet repeats for each in-flight WRITE on qid:1, alternating cid:0 and cid:1 (len:1, varying LBA), roughly every 13 ms from 20:22:59.994 through 20:23:01.163 ...]
00:24:14.266 [2024-05-16 20:23:01.176989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13caec0) with pdu=0x2000190fd640
00:24:14.266 [2024-05-16 20:23:01.177252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.266 [2024-05-16 20:23:01.177282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.266
00:24:14.266                                                             Latency(us)
00:24:14.266 Device Information                         : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:14.266 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:24:14.266 nvme0n1                                     :       2.01   18539.45      72.42       0.00       0.00    6887.09    3980.71   14660.65
00:24:14.266 ===================================================================================================================
00:24:14.266 Total                                       :               18539.45      72.42       0.00       0.00    6887.09    3980.71   14660.65
00:24:14.266 0
00:24:14.266 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:14.266 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:14.266 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:14.266 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:14.266 | .driver_specific
00:24:14.266 | .nvme_error
00:24:14.266 | .status_code
00:24:14.266 | .command_transient_transport_error'
00:24:14.524 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 ))
00:24:14.524 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 313008
00:24:14.524 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 313008 ']'
00:24:14.524 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 313008
00:24:14.524 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:24:14.524 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:24:14.524 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 313008
00:24:14.524 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:24:14.524 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:24:14.524 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 313008'
killing process with pid 313008
00:24:14.524 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 313008
Received shutdown signal, test time was about 2.000000 seconds
00:24:14.524
00:24:14.524                                                             Latency(us)
00:24:14.524 Device Information                         : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:14.524 ===================================================================================================================
00:24:14.524 Total                                       :                   0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:24:14.524 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 313008
00:24:14.783 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:24:14.783 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:14.783 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:14.783 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:14.783 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:14.783 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=313414
00:24:14.784 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:24:14.784 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 313414 /var/tmp/bperf.sock
00:24:14.784 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 313414 ']'
00:24:14.784 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
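Taken together, the trace above is host/digest.sh finishing one error pass and starting the next: it reads how many WRITEs completed with COMMAND TRANSIENT TRANSPORT ERROR from the bperf application's iostat, checks that the count is non-zero (145 here), kills that bdevperf instance, and relaunches bdevperf for the next workload (randwrite, 131072-byte I/O, queue depth 16). The following is a minimal standalone sketch of those steps, reusing the binaries, socket path and jq filter that appear in this trace; the shell variables are illustrative, not the script's own, and this is an approximation of what digest.sh does rather than its exact code:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # SPDK checkout used by this job
    bperf_sock=/var/tmp/bperf.sock                              # RPC socket bdevperf listens on (-r)

    # Count completions flagged as transient transport errors (the injected digest failures).
    errcount=$("$rootdir/scripts/rpc.py" -s "$bperf_sock" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')
    (( errcount > 0 ))                                          # this run reported 145

    # Tear down the finished bdevperf instance, then start the next pass:
    # randwrite, 128 KiB I/O, queue depth 16, waiting for RPC configuration (-z).
    kill "$bperfpid" && wait "$bperfpid"
    "$rootdir/build/examples/bdevperf" -m 2 -r "$bperf_sock" -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

Note that the pass above still finishes with Fail/s of 0.00 in its latency report even though 145 transient transport errors were counted; the errors are retried rather than surfaced as failed I/O.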
00:24:14.784 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:24:14.784 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:14.784 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:24:14.784 20:23:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:14.784 [2024-05-16 20:23:01.822120] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
00:24:14.784 [2024-05-16 20:23:01.822194] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid313414 ]
00:24:14.784 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:14.784 Zero copy mechanism will not be used.
00:24:14.784 EAL: No free 2048 kB hugepages reported on node 1
00:24:14.784 [2024-05-16 20:23:01.882971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:15.042 [2024-05-16 20:23:01.996262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:24:15.042 20:23:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:24:15.042 20:23:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:24:15.042 20:23:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:15.042 20:23:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:15.300 20:23:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:15.300 20:23:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:15.300 20:23:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:15.300 20:23:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:15.300 20:23:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:15.300 20:23:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:15.558 nvme0n1
00:24:15.558 20:23:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:24:15.558 20:23:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:15.558 20:23:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:15.558 20:23:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:15.558 20:23:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
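Once the new bdevperf process is up on /var/tmp/bperf.sock, the commands traced above prepare the data-digest error pass: NVMe error statistics are enabled with unlimited bdev retries, crc32c error injection is cleared, the controller is attached over TCP with data digest enabled (--ddgst), crc32c corruption is re-enabled with the same -t corrupt -i 32 arguments, and the I/O job is started. The same sequence written out as plain RPC calls is sketched below; paths, addresses and the NQN are taken from this trace, and rpc_cmd is assumed here to resolve to scripts/rpc.py against the target application's default RPC socket (that assumption comes from the helper's usual definition, not from this log):

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bperf_sock=/var/tmp/bperf.sock

    # Keep per-status-code NVMe error counters and retry failed I/O indefinitely, so every
    # injected digest error is counted as a transient transport error instead of failing the job.
    "$rootdir/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # What rpc_cmd does in the trace: crc32c error injection is switched off while the controller attaches.
    "$rootdir/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

    # Attach the remote namespace over TCP with data digest enabled; this creates bdev nvme0n1.
    "$rootdir/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-enable crc32c error injection exactly as the trace does (-t corrupt -i 32),
    # so the randwrite job below keeps tripping data digest checks.
    "$rootdir/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Start the 2-second I/O job; the digest failures show up as the
    # COMMAND TRANSIENT TRANSPORT ERROR completions logged after this point.
    "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests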
00:24:15.558 20:23:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:15.817 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:15.817 Zero copy mechanism will not be used.
00:24:15.817 Running I/O for 2 seconds...
00:24:15.817 [2024-05-16 20:23:02.819275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90
00:24:15.817 [2024-05-16 20:23:02.819654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.817 [2024-05-16 20:23:02.819696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplet repeats for the queued 128 KiB WRITEs on qid:1 cid:15 (len:32, sqhd cycling 0041/0061/0001/0021), every few milliseconds from 20:23:02.825 through 20:23:03.010 ...]
00:24:16.078 [2024-05-16 20:23:03.014922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90
00:24:16.078 [2024-05-16 20:23:03.015204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.078 [2024-05-16 20:23:03.015232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.078 [2024-05-16 20:23:03.019775] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.078 [2024-05-16 20:23:03.020059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.078 [2024-05-16 20:23:03.020087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.078 [2024-05-16 20:23:03.024648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.078 [2024-05-16 20:23:03.024988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.078 [2024-05-16 20:23:03.025016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.078 [2024-05-16 20:23:03.029737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.078 [2024-05-16 20:23:03.030029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.078 [2024-05-16 20:23:03.030058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.078 [2024-05-16 20:23:03.034541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.078 [2024-05-16 20:23:03.034612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.078 [2024-05-16 20:23:03.034643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.078 [2024-05-16 20:23:03.040208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.078 [2024-05-16 20:23:03.040567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.078 [2024-05-16 20:23:03.040598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.078 [2024-05-16 20:23:03.045070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.078 [2024-05-16 20:23:03.045352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.078 [2024-05-16 20:23:03.045379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.078 [2024-05-16 20:23:03.049821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.078 [2024-05-16 20:23:03.050108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.078 [2024-05-16 20:23:03.050136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.078 [2024-05-16 20:23:03.054758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.078 [2024-05-16 20:23:03.055068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.078 [2024-05-16 20:23:03.055097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.078 [2024-05-16 20:23:03.059474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.078 [2024-05-16 20:23:03.059752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.078 [2024-05-16 20:23:03.059780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.078 [2024-05-16 20:23:03.064160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.078 [2024-05-16 20:23:03.064453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.078 [2024-05-16 20:23:03.064480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.078 [2024-05-16 20:23:03.068944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.078 [2024-05-16 20:23:03.069224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.078 [2024-05-16 20:23:03.069252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.078 [2024-05-16 20:23:03.073604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.078 [2024-05-16 20:23:03.073891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.078 [2024-05-16 20:23:03.073919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.078 [2024-05-16 20:23:03.078251] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.078 [2024-05-16 20:23:03.078592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.078 [2024-05-16 20:23:03.078623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.082999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.083282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 
[2024-05-16 20:23:03.083310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.087767] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.088037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.088065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.092427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.092704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.092732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.097180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.097459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.097487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.101923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.102235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.102263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.106757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.107043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.107070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.111427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.111706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.111734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.116754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.117121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.117150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.122011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.122323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.122352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.126836] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.127123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.127151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.131789] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.132089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.132117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.137325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.137606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.137634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.142212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.142516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.142544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.146995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.147302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.147330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.151772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.152057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.152085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.156462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.156555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.156585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.161545] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.161838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.161892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.166270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.166550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.166578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.172471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.172764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.172808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.179009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.179303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.179331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.185241] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.185521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.185549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.191487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.191767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.191794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.197760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.198048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.198076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.203980] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.204070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.204095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.211264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.211544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.211586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.079 [2024-05-16 20:23:03.218651] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.079 [2024-05-16 20:23:03.218976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.079 [2024-05-16 20:23:03.219005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.339 [2024-05-16 20:23:03.226248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.339 [2024-05-16 20:23:03.226546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.339 [2024-05-16 20:23:03.226573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.339 [2024-05-16 20:23:03.233495] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.339 [2024-05-16 20:23:03.233801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.339 [2024-05-16 20:23:03.233843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.339 [2024-05-16 20:23:03.240805] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.339 
[2024-05-16 20:23:03.241097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.339 [2024-05-16 20:23:03.241141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.339 [2024-05-16 20:23:03.247970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.339 [2024-05-16 20:23:03.248264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.339 [2024-05-16 20:23:03.248292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.339 [2024-05-16 20:23:03.255138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.339 [2024-05-16 20:23:03.255419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.339 [2024-05-16 20:23:03.255448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.339 [2024-05-16 20:23:03.262268] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.339 [2024-05-16 20:23:03.262549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.339 [2024-05-16 20:23:03.262577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.339 [2024-05-16 20:23:03.268770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.339 [2024-05-16 20:23:03.269060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.339 [2024-05-16 20:23:03.269088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.339 [2024-05-16 20:23:03.274155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.339 [2024-05-16 20:23:03.274436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.339 [2024-05-16 20:23:03.274464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.339 [2024-05-16 20:23:03.278883] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.339 [2024-05-16 20:23:03.279190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.339 [2024-05-16 20:23:03.279219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.339 [2024-05-16 20:23:03.284179] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.339 [2024-05-16 20:23:03.284547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.339 [2024-05-16 20:23:03.284577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.339 [2024-05-16 20:23:03.290150] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.339 [2024-05-16 20:23:03.290447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.339 [2024-05-16 20:23:03.290475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.339 [2024-05-16 20:23:03.296557] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.339 [2024-05-16 20:23:03.296871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.339 [2024-05-16 20:23:03.296900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.339 [2024-05-16 20:23:03.303206] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.339 [2024-05-16 20:23:03.303548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.339 [2024-05-16 20:23:03.303580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.339 [2024-05-16 20:23:03.309716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.339 [2024-05-16 20:23:03.310050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.339 [2024-05-16 20:23:03.310084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.339 [2024-05-16 20:23:03.316125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.339 [2024-05-16 20:23:03.316415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.339 [2024-05-16 20:23:03.316445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.339 [2024-05-16 20:23:03.322375] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.339 [2024-05-16 20:23:03.322662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.339 [2024-05-16 20:23:03.322692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.339 [2024-05-16 20:23:03.328678] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.339 [2024-05-16 20:23:03.328965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.339 [2024-05-16 20:23:03.329002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.339 [2024-05-16 20:23:03.335088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.335503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.335534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.341491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.341774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.341803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.347771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.348057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.348087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.354014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.354307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.354335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.360281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.360575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.360602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.366571] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.366850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.366886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:16.340 [2024-05-16 20:23:03.372945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.373228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.373257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.380158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.380473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.380503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.386813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.387102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.387136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.393749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.394073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.394102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.400910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.401228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.401266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.406748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.407035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.407064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.411829] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.412135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.412163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.416938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.417232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.417260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.422314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.422593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.422622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.426977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.427257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.427285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.431756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.432044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.432077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.436604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.436888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.436917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.441479] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.441768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.441796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.446932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.447019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.447045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.453592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.453908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.453936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.460636] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.460922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.460951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.467360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.467672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.467700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.473958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.474164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.474192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.340 [2024-05-16 20:23:03.481334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.340 [2024-05-16 20:23:03.481651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.340 [2024-05-16 20:23:03.481679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.600 [2024-05-16 20:23:03.488233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.600 [2024-05-16 20:23:03.488549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.600 [2024-05-16 20:23:03.488577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.600 [2024-05-16 20:23:03.495445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.600 [2024-05-16 20:23:03.495725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.600 [2024-05-16 20:23:03.495753] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.600 [2024-05-16 20:23:03.502305] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.600 [2024-05-16 20:23:03.502655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.600 [2024-05-16 20:23:03.502683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.600 [2024-05-16 20:23:03.509299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.600 [2024-05-16 20:23:03.509573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.600 [2024-05-16 20:23:03.509601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.600 [2024-05-16 20:23:03.516143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.600 [2024-05-16 20:23:03.516453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.600 [2024-05-16 20:23:03.516481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.600 [2024-05-16 20:23:03.523145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.600 [2024-05-16 20:23:03.523431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.600 [2024-05-16 20:23:03.523459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.600 [2024-05-16 20:23:03.529977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.600 [2024-05-16 20:23:03.530367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.600 [2024-05-16 20:23:03.530409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.600 [2024-05-16 20:23:03.536775] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.600 [2024-05-16 20:23:03.537131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.600 [2024-05-16 20:23:03.537174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.600 [2024-05-16 20:23:03.543518] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.600 [2024-05-16 20:23:03.543783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.600 
[2024-05-16 20:23:03.543812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.600 [2024-05-16 20:23:03.548326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.600 [2024-05-16 20:23:03.548589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.600 [2024-05-16 20:23:03.548617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.600 [2024-05-16 20:23:03.552880] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.600 [2024-05-16 20:23:03.553158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.601 [2024-05-16 20:23:03.553186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.601 [2024-05-16 20:23:03.557430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.601 [2024-05-16 20:23:03.557692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.601 [2024-05-16 20:23:03.557720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.601 [2024-05-16 20:23:03.562040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.601 [2024-05-16 20:23:03.562322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.601 [2024-05-16 20:23:03.562349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.601 [2024-05-16 20:23:03.566751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.601 [2024-05-16 20:23:03.567024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.601 [2024-05-16 20:23:03.567053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.601 [2024-05-16 20:23:03.571279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.601 [2024-05-16 20:23:03.571554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.601 [2024-05-16 20:23:03.571581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.601 [2024-05-16 20:23:03.576490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:16.601 [2024-05-16 20:23:03.576753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.601 [2024-05-16 20:23:03.576781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:16.601 to 00:24:17.383 [2024-05-16 20:23:03.581581 to 20:23:04.275467] repeated iterations of the same injected data-digest-error pattern on the write path: tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90, followed by nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 len:32 (varying lba) SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 p:0 m:0 dnr:0, with sqhd cycling through 0001/0021/0041/0061
00:24:17.383 [2024-05-16 20:23:04.279347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.383 [2024-05-16 20:23:04.279630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.383 [2024-05-16 20:23:04.279657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.383 [2024-05-16 20:23:04.283915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.383 [2024-05-16 20:23:04.284170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.383 [2024-05-16 20:23:04.284198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.383 [2024-05-16 20:23:04.288460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.383 [2024-05-16 20:23:04.288710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.383 [2024-05-16 20:23:04.288743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.383 [2024-05-16 20:23:04.292975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.383 [2024-05-16 20:23:04.293224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.383 [2024-05-16 20:23:04.293252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.383 [2024-05-16 20:23:04.297475] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.383 [2024-05-16 20:23:04.297723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.383 [2024-05-16 20:23:04.297751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.383 [2024-05-16 20:23:04.301942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.383 [2024-05-16 20:23:04.302194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.383 [2024-05-16 20:23:04.302222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.383 [2024-05-16 20:23:04.306439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.383 [2024-05-16 20:23:04.306690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.383 [2024-05-16 20:23:04.306717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.383 [2024-05-16 20:23:04.310915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.383 [2024-05-16 20:23:04.311168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.383 
[2024-05-16 20:23:04.311195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.383 [2024-05-16 20:23:04.315494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.383 [2024-05-16 20:23:04.315741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.383 [2024-05-16 20:23:04.315768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.383 [2024-05-16 20:23:04.319990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.383 [2024-05-16 20:23:04.320242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.383 [2024-05-16 20:23:04.320269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.383 [2024-05-16 20:23:04.324530] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.383 [2024-05-16 20:23:04.324778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.383 [2024-05-16 20:23:04.324805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.383 [2024-05-16 20:23:04.329067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.383 [2024-05-16 20:23:04.329353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.383 [2024-05-16 20:23:04.329381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.383 [2024-05-16 20:23:04.333637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.383 [2024-05-16 20:23:04.333997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.383 [2024-05-16 20:23:04.334025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.383 [2024-05-16 20:23:04.338291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.383 [2024-05-16 20:23:04.338539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.383 [2024-05-16 20:23:04.338567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.383 [2024-05-16 20:23:04.342769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.343029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.343058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.347271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.347519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.347547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.351752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.352009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.352037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.356257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.356508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.356535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.360785] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.361055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.361082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.365367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.365644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.365671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.369962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.370212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.370240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.374505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.374757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.374785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.379067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.379319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.379346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.383662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.383921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.383949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.388172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.388422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.388449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.392657] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.392918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.392946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.397273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.397521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.397548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.401780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.402036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.402064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.406338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.406586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.406618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.410866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.411116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.411143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.415410] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.415659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.415687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.420054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.420305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.420333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.425307] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.425628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.425656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.431380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.431630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.431658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.435967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.436220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.436263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.440556] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 
[2024-05-16 20:23:04.440805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.440833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.445102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.445348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.445376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.449669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.449931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.449959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.454201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.454450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.454478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.458774] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.459031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.459059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.463395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.463651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.463680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.384 [2024-05-16 20:23:04.468026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.384 [2024-05-16 20:23:04.468281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.384 [2024-05-16 20:23:04.468312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.385 [2024-05-16 20:23:04.472673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.385 [2024-05-16 20:23:04.472934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.385 [2024-05-16 20:23:04.472964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.385 [2024-05-16 20:23:04.477314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.385 [2024-05-16 20:23:04.477574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.385 [2024-05-16 20:23:04.477603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.385 [2024-05-16 20:23:04.481960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.385 [2024-05-16 20:23:04.482216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.385 [2024-05-16 20:23:04.482245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.385 [2024-05-16 20:23:04.486578] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.385 [2024-05-16 20:23:04.486831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.385 [2024-05-16 20:23:04.486869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.385 [2024-05-16 20:23:04.491152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.385 [2024-05-16 20:23:04.491402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.385 [2024-05-16 20:23:04.491430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.385 [2024-05-16 20:23:04.495732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.385 [2024-05-16 20:23:04.495990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.385 [2024-05-16 20:23:04.496018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.385 [2024-05-16 20:23:04.500297] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.385 [2024-05-16 20:23:04.500543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.385 [2024-05-16 20:23:04.500571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.385 [2024-05-16 20:23:04.504827] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.385 [2024-05-16 20:23:04.505087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.385 [2024-05-16 20:23:04.505115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.385 [2024-05-16 20:23:04.509407] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.385 [2024-05-16 20:23:04.509656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.385 [2024-05-16 20:23:04.509684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.385 [2024-05-16 20:23:04.513985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.385 [2024-05-16 20:23:04.514232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.385 [2024-05-16 20:23:04.514260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.385 [2024-05-16 20:23:04.518569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.385 [2024-05-16 20:23:04.518819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.385 [2024-05-16 20:23:04.518846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.385 [2024-05-16 20:23:04.523214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.385 [2024-05-16 20:23:04.523462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.385 [2024-05-16 20:23:04.523491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.645 [2024-05-16 20:23:04.528006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.645 [2024-05-16 20:23:04.528258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.645 [2024-05-16 20:23:04.528291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.645 [2024-05-16 20:23:04.532648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.645 [2024-05-16 20:23:04.532911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.645 [2024-05-16 20:23:04.532939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:17.645 [2024-05-16 20:23:04.537236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.645 [2024-05-16 20:23:04.537497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.645 [2024-05-16 20:23:04.537525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.645 [2024-05-16 20:23:04.543237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.645 [2024-05-16 20:23:04.543542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.645 [2024-05-16 20:23:04.543570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.645 [2024-05-16 20:23:04.548377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.645 [2024-05-16 20:23:04.548632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.645 [2024-05-16 20:23:04.548662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.645 [2024-05-16 20:23:04.553054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.645 [2024-05-16 20:23:04.553310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.645 [2024-05-16 20:23:04.553339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.645 [2024-05-16 20:23:04.557643] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.645 [2024-05-16 20:23:04.557917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.645 [2024-05-16 20:23:04.557947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.645 [2024-05-16 20:23:04.562241] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.645 [2024-05-16 20:23:04.562491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.645 [2024-05-16 20:23:04.562524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.645 [2024-05-16 20:23:04.566945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.645 [2024-05-16 20:23:04.567203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.645 [2024-05-16 20:23:04.567231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.645 [2024-05-16 20:23:04.571621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.645 [2024-05-16 20:23:04.571882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.645 [2024-05-16 20:23:04.571918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.645 [2024-05-16 20:23:04.576291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.645 [2024-05-16 20:23:04.576538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.645 [2024-05-16 20:23:04.576566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.645 [2024-05-16 20:23:04.580993] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.645 [2024-05-16 20:23:04.581244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.645 [2024-05-16 20:23:04.581272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.645 [2024-05-16 20:23:04.585700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.645 [2024-05-16 20:23:04.585957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.645 [2024-05-16 20:23:04.585985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.645 [2024-05-16 20:23:04.590333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.645 [2024-05-16 20:23:04.590581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.645 [2024-05-16 20:23:04.590609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.645 [2024-05-16 20:23:04.595077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.645 [2024-05-16 20:23:04.595341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.645 [2024-05-16 20:23:04.595369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.645 [2024-05-16 20:23:04.599741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.645 [2024-05-16 20:23:04.599997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.645 [2024-05-16 20:23:04.600026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.645 [2024-05-16 20:23:04.604649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.645 [2024-05-16 20:23:04.604929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.645 [2024-05-16 20:23:04.604957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.645 [2024-05-16 20:23:04.610552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.645 [2024-05-16 20:23:04.610836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.645 [2024-05-16 20:23:04.610873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.615588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.615839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.615876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.620511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.620760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.620787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.625600] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.625868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.625896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.630173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.630421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.630449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.635242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.635548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.635576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.641266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.641592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.641621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.646399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.646680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.646708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.652878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.653145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.653172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.658402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.658658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.658686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.664446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.664723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.664751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.670256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.670353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.670381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.676458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.676709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 
[2024-05-16 20:23:04.676737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.682391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.682652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.682680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.689320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.689611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.689639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.695213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.695496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.695524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.700276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.700529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.700557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.704867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.705115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.705142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.709920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.710170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.710198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.715017] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.715269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.715297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.720106] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.720354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.720382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.724700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.724959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.724988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.729529] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.729779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.729806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.734562] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.734812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.734840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.739702] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.739961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.739989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.745461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.745711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.745739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.749965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.750218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.750251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.754456] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.646 [2024-05-16 20:23:04.754706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.646 [2024-05-16 20:23:04.754734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.646 [2024-05-16 20:23:04.759082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.647 [2024-05-16 20:23:04.759332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.647 [2024-05-16 20:23:04.759361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.647 [2024-05-16 20:23:04.764267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.647 [2024-05-16 20:23:04.764517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.647 [2024-05-16 20:23:04.764545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.647 [2024-05-16 20:23:04.768804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.647 [2024-05-16 20:23:04.769064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.647 [2024-05-16 20:23:04.769092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.647 [2024-05-16 20:23:04.773813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.647 [2024-05-16 20:23:04.774119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.647 [2024-05-16 20:23:04.774147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.647 [2024-05-16 20:23:04.779806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.647 [2024-05-16 20:23:04.780182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.647 [2024-05-16 20:23:04.780210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.647 [2024-05-16 20:23:04.784674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.647 [2024-05-16 20:23:04.784934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.647 [2024-05-16 20:23:04.784963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.905 [2024-05-16 20:23:04.789548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.905 [2024-05-16 20:23:04.789834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.905 [2024-05-16 20:23:04.789869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.905 [2024-05-16 20:23:04.794237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.905 [2024-05-16 20:23:04.794495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.905 [2024-05-16 20:23:04.794523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.905 [2024-05-16 20:23:04.799225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.905 [2024-05-16 20:23:04.799504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.905 [2024-05-16 20:23:04.799531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.905 [2024-05-16 20:23:04.805001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.905 [2024-05-16 20:23:04.805289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.905 [2024-05-16 20:23:04.805316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.905 [2024-05-16 20:23:04.809737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.905 [2024-05-16 20:23:04.810000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.905 [2024-05-16 20:23:04.810029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.905 [2024-05-16 20:23:04.814434] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.905 [2024-05-16 20:23:04.814682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.905 [2024-05-16 20:23:04.814710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.905 [2024-05-16 20:23:04.818873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x120aa00) with pdu=0x2000190fef90 00:24:17.905 
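The run above is the error path this test exercises: WRITEs whose CRC32 data digest does not match are flagged by tcp.c and completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) instead of a hard I/O failure, which is why the Fail/s column in the summary below stays at 0.00. To summarize such a run offline, the matching entries can be counted in a saved copy of this console output; the commands below are only a sketch and assume the log was archived as build.log (a hypothetical filename).

  # Count the digest failures and the matching transient-transport completions.
  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' build.log
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' build.log
  # Show which LBAs were affected and how often.
  grep -o 'lba:[0-9]*' build.log | sort | uniq -c | sort -rn | head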
[2024-05-16 20:23:04.818999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.905 [2024-05-16 20:23:04.819026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.905 00:24:17.905 Latency(us) 00:24:17.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.905 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:17.905 nvme0n1 : 2.00 5885.36 735.67 0.00 0.00 2711.23 1832.58 7524.50 00:24:17.905 =================================================================================================================== 00:24:17.905 Total : 5885.36 735.67 0.00 0.00 2711.23 1832.58 7524.50 00:24:17.905 0 00:24:17.905 20:23:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:17.905 20:23:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:17.905 20:23:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:17.905 | .driver_specific 00:24:17.905 | .nvme_error 00:24:17.905 | .status_code 00:24:17.905 | .command_transient_transport_error' 00:24:17.905 20:23:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:18.164 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 380 > 0 )) 00:24:18.164 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 313414 00:24:18.164 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 313414 ']' 00:24:18.164 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 313414 00:24:18.164 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:24:18.164 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:18.164 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 313414 00:24:18.164 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:18.164 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:18.164 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 313414' 00:24:18.164 killing process with pid 313414 00:24:18.164 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 313414 00:24:18.164 Received shutdown signal, test time was about 2.000000 seconds 00:24:18.164 00:24:18.164 Latency(us) 00:24:18.164 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.164 =================================================================================================================== 00:24:18.164 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:18.164 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 313414 00:24:18.422 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 312052 00:24:18.422 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@946 -- # '[' -z 312052 ']' 00:24:18.422 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 312052 00:24:18.422 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:24:18.422 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:18.422 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 312052 00:24:18.422 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:18.422 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:18.422 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 312052' 00:24:18.422 killing process with pid 312052 00:24:18.422 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 312052 00:24:18.422 [2024-05-16 20:23:05.418179] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:18.422 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 312052 00:24:18.681 00:24:18.681 real 0m15.367s 00:24:18.681 user 0m30.264s 00:24:18.681 sys 0m4.235s 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:18.681 ************************************ 00:24:18.681 END TEST nvmf_digest_error 00:24:18.681 ************************************ 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:18.681 rmmod nvme_tcp 00:24:18.681 rmmod nvme_fabrics 00:24:18.681 rmmod nvme_keyring 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 312052 ']' 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 312052 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 312052 ']' 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 312052 00:24:18.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (312052) - No such process 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 312052 is not found' 00:24:18.681 Process with 
pid 312052 is not found 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:18.681 20:23:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.215 20:23:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:21.215 00:24:21.215 real 0m35.854s 00:24:21.215 user 1m3.346s 00:24:21.215 sys 0m10.042s 00:24:21.215 20:23:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:21.215 20:23:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:21.215 ************************************ 00:24:21.215 END TEST nvmf_digest 00:24:21.215 ************************************ 00:24:21.215 20:23:07 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:24:21.215 20:23:07 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:24:21.215 20:23:07 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:24:21.215 20:23:07 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:21.215 20:23:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:21.215 20:23:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:21.215 20:23:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:21.215 ************************************ 00:24:21.215 START TEST nvmf_bdevperf 00:24:21.215 ************************************ 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:21.215 * Looking for test storage... 
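For reference, the pass/fail verdict of the digest-error test that ends above comes down to a single counter read from bdevperf over its RPC socket. A minimal sketch of that check, assuming only what the trace shows (the rpc.py path, the /var/tmp/bperf.sock socket, the nvme0n1 bdev name and the jq filter are taken from the log; the errcount variable name is illustrative):

# Query per-bdev I/O statistics from the bdevperf instance listening on
# /var/tmp/bperf.sock and pull out the count of COMMAND TRANSIENT TRANSPORT
# ERROR completions produced by the injected data-digest errors.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# The run above counted 380 such errors; the test only requires a non-zero count.
(( errcount > 0 ))
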
00:24:21.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:21.215 20:23:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:23.117 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:23.117 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:23.117 Found net devices under 0000:09:00.0: cvl_0_0 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:23.117 Found net devices under 0000:09:00.1: cvl_0_1 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:23.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:23.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:24:23.117 00:24:23.117 --- 10.0.0.2 ping statistics --- 00:24:23.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.117 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:23.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:23.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:24:23.117 00:24:23.117 --- 10.0.0.1 ping statistics --- 00:24:23.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.117 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=315767 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 315767 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 315767 ']' 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:23.117 20:23:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:23.117 [2024-05-16 20:23:10.041070] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:24:23.117 [2024-05-16 20:23:10.041189] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.117 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.117 [2024-05-16 20:23:10.108482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:23.117 [2024-05-16 20:23:10.221543] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:23.117 [2024-05-16 20:23:10.221592] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.117 [2024-05-16 20:23:10.221620] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.117 [2024-05-16 20:23:10.221631] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.117 [2024-05-16 20:23:10.221641] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:23.117 [2024-05-16 20:23:10.221729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.117 [2024-05-16 20:23:10.221785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.117 [2024-05-16 20:23:10.221783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:23.375 20:23:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:23.375 20:23:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:24:23.375 20:23:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:23.375 20:23:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:23.375 20:23:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:23.375 20:23:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.375 20:23:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:23.375 20:23:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.375 20:23:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:23.375 [2024-05-16 20:23:10.355894] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.375 20:23:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.375 20:23:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:23.375 20:23:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.375 20:23:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:23.375 Malloc0 00:24:23.375 20:23:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.375 20:23:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:23.375 20:23:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.375 20:23:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:23.375 20:23:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.376 20:23:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:23.376 20:23:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.376 20:23:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:23.376 20:23:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.376 20:23:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:23.376 20:23:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
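Stripped of the xtrace prefixes, the tgt_init sequence above is a handful of RPCs issued against the nvmf_tgt launched earlier. A rough stand-alone equivalent using rpc.py directly (the arguments are copied from the rpc_cmd calls in the trace; that rpc_cmd targets the default /var/tmp/spdk.sock is an assumption based on the waitforlisten call above):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$rpc" nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options exactly as traced
"$rpc" bdev_malloc_create 64 512 -b Malloc0           # 64 MiB malloc bdev with 512-byte blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
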
00:24:23.376 20:23:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:23.376 [2024-05-16 20:23:10.419158] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:23.376 [2024-05-16 20:23:10.419484] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.376 20:23:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.376 20:23:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:23.376 20:23:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:23.376 20:23:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:23.376 20:23:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:23.376 20:23:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:23.376 20:23:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:23.376 { 00:24:23.376 "params": { 00:24:23.376 "name": "Nvme$subsystem", 00:24:23.376 "trtype": "$TEST_TRANSPORT", 00:24:23.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:23.376 "adrfam": "ipv4", 00:24:23.376 "trsvcid": "$NVMF_PORT", 00:24:23.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:23.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:23.376 "hdgst": ${hdgst:-false}, 00:24:23.376 "ddgst": ${ddgst:-false} 00:24:23.376 }, 00:24:23.376 "method": "bdev_nvme_attach_controller" 00:24:23.376 } 00:24:23.376 EOF 00:24:23.376 )") 00:24:23.376 20:23:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:23.376 20:23:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:23.376 20:23:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:23.376 20:23:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:23.376 "params": { 00:24:23.376 "name": "Nvme1", 00:24:23.376 "trtype": "tcp", 00:24:23.376 "traddr": "10.0.0.2", 00:24:23.376 "adrfam": "ipv4", 00:24:23.376 "trsvcid": "4420", 00:24:23.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:23.376 "hdgst": false, 00:24:23.376 "ddgst": false 00:24:23.376 }, 00:24:23.376 "method": "bdev_nvme_attach_controller" 00:24:23.376 }' 00:24:23.376 [2024-05-16 20:23:10.463760] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:24:23.376 [2024-05-16 20:23:10.463844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid315909 ] 00:24:23.376 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.634 [2024-05-16 20:23:10.525193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.634 [2024-05-16 20:23:10.634329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.891 Running I/O for 1 seconds... 
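The config handed to bdevperf over /dev/fd/62 is the JSON printed just above: a single bdev_nvme_attach_controller call aimed at the listener created during tgt_init. A sketch of the same run as a stand-alone invocation (the params block and the bdevperf flags are verbatim from the trace; the "subsystems"/"bdev" envelope is an assumption about the full config that gen_nvmf_target_json emits, and /tmp/bperf.json is an illustrative path):

# Hypothetical reconstruction of the generated config; only the params block
# below is taken verbatim from the printf output in the trace.
cat > /tmp/bperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 1-second verify run, queue depth 128, 4 KiB I/O size, as traced above.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 1
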
00:24:24.826 00:24:24.826 Latency(us) 00:24:24.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.826 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:24.826 Verification LBA range: start 0x0 length 0x4000 00:24:24.826 Nvme1n1 : 1.01 8792.87 34.35 0.00 0.00 14460.72 1735.49 15534.46 00:24:24.826 =================================================================================================================== 00:24:24.826 Total : 8792.87 34.35 0.00 0.00 14460.72 1735.49 15534.46 00:24:25.085 20:23:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=316054 00:24:25.085 20:23:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:25.085 20:23:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:25.085 20:23:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:25.085 20:23:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:25.085 20:23:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:25.085 20:23:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:25.085 20:23:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:25.085 { 00:24:25.085 "params": { 00:24:25.085 "name": "Nvme$subsystem", 00:24:25.085 "trtype": "$TEST_TRANSPORT", 00:24:25.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.085 "adrfam": "ipv4", 00:24:25.085 "trsvcid": "$NVMF_PORT", 00:24:25.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.085 "hdgst": ${hdgst:-false}, 00:24:25.085 "ddgst": ${ddgst:-false} 00:24:25.085 }, 00:24:25.085 "method": "bdev_nvme_attach_controller" 00:24:25.085 } 00:24:25.085 EOF 00:24:25.085 )") 00:24:25.085 20:23:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:25.085 20:23:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:25.085 20:23:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:25.085 20:23:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:25.085 "params": { 00:24:25.085 "name": "Nvme1", 00:24:25.085 "trtype": "tcp", 00:24:25.085 "traddr": "10.0.0.2", 00:24:25.085 "adrfam": "ipv4", 00:24:25.085 "trsvcid": "4420", 00:24:25.085 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.085 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:25.085 "hdgst": false, 00:24:25.085 "ddgst": false 00:24:25.085 }, 00:24:25.085 "method": "bdev_nvme_attach_controller" 00:24:25.085 }' 00:24:25.085 [2024-05-16 20:23:12.104107] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:24:25.085 [2024-05-16 20:23:12.104206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316054 ] 00:24:25.085 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.085 [2024-05-16 20:23:12.162521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.343 [2024-05-16 20:23:12.269226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.600 Running I/O for 15 seconds... 
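What follows is the failure-injection half of the test: a second bdevperf run is launched for 15 seconds with -f, and three seconds into that run the nvmf_tgt it is connected to is killed outright. In outline (the pids are the ones printed in this log; feeding --json from a process substitution over gen_nvmf_target_json, the helper traced above from the sourced nvmf/common.sh, is an assumption based on the /dev/fd/63 path):

# Second bdevperf run: same workload, 15-second duration, -f taken verbatim from the trace.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!      # 316054 in this log

sleep 3             # let I/O start flowing against nqn.2016-06.io.spdk:cnode1
kill -9 315767      # hard-kill the nvmf_tgt (nvmfpid above) while bdevperf is mid-run
sleep 3             # the ABORTED - SQ DELETION completions below are the expected fallout
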
00:24:28.131 20:23:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 315767 00:24:28.131 20:23:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:28.132 [2024-05-16 20:23:15.077106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:37512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.132 [2024-05-16 20:23:15.077174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:37592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:37608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077517] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:37656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:37696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077844] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.077948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.132 [2024-05-16 20:23:15.077978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.077993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.078006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.078021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.078034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.078049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.078063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.078078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.078091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.078105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.078119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.078150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.078165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.078182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.078196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.078213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.078228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.078244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.078260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.078276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.078291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.078308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.078326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.078344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.078359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.078376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.078390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.078407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.078423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.078440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.078455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.078471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:37872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.078487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 [2024-05-16 20:23:15.078504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.132 [2024-05-16 20:23:15.078519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.132 
[2024-05-16 20:23:15.078536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.078551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.078568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.078583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.078600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.078615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.078631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.078646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.078663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.078678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.078695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.078709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.078730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.078745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.078762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.078777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.078794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.078809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.078825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.078840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.078864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.078881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.078914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.078928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.078944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:37984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.078958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.078973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.078987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38128 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 
20:23:15.079850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.133 [2024-05-16 20:23:15.079875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.133 [2024-05-16 20:23:15.079916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.079932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.079945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.079960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.079977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.079997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:38344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.080964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.080985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.081000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.081014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.081029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.081046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.081062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.081076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.081091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.081105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.081120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.081148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.081163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.081177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.134 [2024-05-16 20:23:15.081191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.134 [2024-05-16 20:23:15.081222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.135 [2024-05-16 20:23:15.081239] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.135 [2024-05-16 20:23:15.081254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.135 [2024-05-16 20:23:15.081275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.135 [2024-05-16 20:23:15.081290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.135 [2024-05-16 20:23:15.081307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.135 [2024-05-16 20:23:15.081330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.135 [2024-05-16 20:23:15.081347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:37552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.135 [2024-05-16 20:23:15.081362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.135 [2024-05-16 20:23:15.081379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:37560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.135 [2024-05-16 20:23:15.081394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.135 [2024-05-16 20:23:15.081411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:37568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.135 [2024-05-16 20:23:15.081425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.135 [2024-05-16 20:23:15.081443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f9a0 is same with the state(5) to be set 00:24:28.135 [2024-05-16 20:23:15.081462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.135 [2024-05-16 20:23:15.081474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.135 [2024-05-16 20:23:15.081486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37576 len:8 PRP1 0x0 PRP2 0x0 00:24:28.135 [2024-05-16 20:23:15.081500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.135 [2024-05-16 20:23:15.081571] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x249f9a0 was disconnected and freed. reset controller. 
00:24:28.135 [2024-05-16 20:23:15.081655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.135 [2024-05-16 20:23:15.081678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.135 [2024-05-16 20:23:15.081695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.135 [2024-05-16 20:23:15.081709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.135 [2024-05-16 20:23:15.081731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.135 [2024-05-16 20:23:15.081746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.135 [2024-05-16 20:23:15.081761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.135 [2024-05-16 20:23:15.081775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.135 [2024-05-16 20:23:15.081789] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.135 [2024-05-16 20:23:15.085675] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.135 [2024-05-16 20:23:15.085724] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.135 [2024-05-16 20:23:15.086391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.135 [2024-05-16 20:23:15.086435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.135 [2024-05-16 20:23:15.086451] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.135 [2024-05-16 20:23:15.086691] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.135 [2024-05-16 20:23:15.086967] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.135 [2024-05-16 20:23:15.086991] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.135 [2024-05-16 20:23:15.087009] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.135 [2024-05-16 20:23:15.090642] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.135 [2024-05-16 20:23:15.100059] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.135 [2024-05-16 20:23:15.100538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.135 [2024-05-16 20:23:15.100589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.135 [2024-05-16 20:23:15.100607] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.135 [2024-05-16 20:23:15.100847] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.135 [2024-05-16 20:23:15.101102] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.135 [2024-05-16 20:23:15.101126] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.135 [2024-05-16 20:23:15.101140] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.135 [2024-05-16 20:23:15.104761] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.135 [2024-05-16 20:23:15.113957] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.135 [2024-05-16 20:23:15.114364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.135 [2024-05-16 20:23:15.114395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.135 [2024-05-16 20:23:15.114413] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.135 [2024-05-16 20:23:15.114653] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.135 [2024-05-16 20:23:15.114909] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.135 [2024-05-16 20:23:15.114933] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.135 [2024-05-16 20:23:15.114948] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.135 [2024-05-16 20:23:15.118567] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.135 [2024-05-16 20:23:15.127979] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.135 [2024-05-16 20:23:15.128355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.135 [2024-05-16 20:23:15.128386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.135 [2024-05-16 20:23:15.128404] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.135 [2024-05-16 20:23:15.128654] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.135 [2024-05-16 20:23:15.128918] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.135 [2024-05-16 20:23:15.128944] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.135 [2024-05-16 20:23:15.128958] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.135 [2024-05-16 20:23:15.132574] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.135 [2024-05-16 20:23:15.141972] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.135 [2024-05-16 20:23:15.142366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.135 [2024-05-16 20:23:15.142397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.135 [2024-05-16 20:23:15.142414] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.135 [2024-05-16 20:23:15.142655] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.135 [2024-05-16 20:23:15.142912] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.135 [2024-05-16 20:23:15.142937] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.135 [2024-05-16 20:23:15.142952] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.135 [2024-05-16 20:23:15.146566] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.135 [2024-05-16 20:23:15.155967] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.135 [2024-05-16 20:23:15.156370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.135 [2024-05-16 20:23:15.156396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.135 [2024-05-16 20:23:15.156411] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.135 [2024-05-16 20:23:15.156651] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.135 [2024-05-16 20:23:15.156915] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.135 [2024-05-16 20:23:15.156939] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.135 [2024-05-16 20:23:15.156954] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.135 [2024-05-16 20:23:15.160570] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.135 [2024-05-16 20:23:15.169968] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.135 [2024-05-16 20:23:15.170341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.135 [2024-05-16 20:23:15.170386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.135 [2024-05-16 20:23:15.170402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.135 [2024-05-16 20:23:15.170641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.136 [2024-05-16 20:23:15.170898] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.136 [2024-05-16 20:23:15.170922] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.136 [2024-05-16 20:23:15.170942] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.136 [2024-05-16 20:23:15.174561] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.136 [2024-05-16 20:23:15.183962] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.136 [2024-05-16 20:23:15.184338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.136 [2024-05-16 20:23:15.184370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.136 [2024-05-16 20:23:15.184402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.136 [2024-05-16 20:23:15.184643] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.136 [2024-05-16 20:23:15.184900] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.136 [2024-05-16 20:23:15.184925] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.136 [2024-05-16 20:23:15.184939] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.136 [2024-05-16 20:23:15.188555] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.136 [2024-05-16 20:23:15.197953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.136 [2024-05-16 20:23:15.198316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.136 [2024-05-16 20:23:15.198347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.136 [2024-05-16 20:23:15.198364] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.136 [2024-05-16 20:23:15.198604] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.136 [2024-05-16 20:23:15.198849] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.136 [2024-05-16 20:23:15.198883] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.136 [2024-05-16 20:23:15.198897] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.136 [2024-05-16 20:23:15.202510] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.136 [2024-05-16 20:23:15.211905] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.136 [2024-05-16 20:23:15.212305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.136 [2024-05-16 20:23:15.212336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.136 [2024-05-16 20:23:15.212353] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.136 [2024-05-16 20:23:15.212593] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.136 [2024-05-16 20:23:15.212838] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.136 [2024-05-16 20:23:15.212876] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.136 [2024-05-16 20:23:15.212893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.136 [2024-05-16 20:23:15.216509] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.136 [2024-05-16 20:23:15.225908] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.136 [2024-05-16 20:23:15.226310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.136 [2024-05-16 20:23:15.226341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.136 [2024-05-16 20:23:15.226358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.136 [2024-05-16 20:23:15.226598] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.136 [2024-05-16 20:23:15.226842] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.136 [2024-05-16 20:23:15.226876] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.136 [2024-05-16 20:23:15.226893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.136 [2024-05-16 20:23:15.230509] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.136 [2024-05-16 20:23:15.239905] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.136 [2024-05-16 20:23:15.240294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.136 [2024-05-16 20:23:15.240325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.136 [2024-05-16 20:23:15.240342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.136 [2024-05-16 20:23:15.240582] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.136 [2024-05-16 20:23:15.240827] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.136 [2024-05-16 20:23:15.240850] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.136 [2024-05-16 20:23:15.240877] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.136 [2024-05-16 20:23:15.244494] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.136 [2024-05-16 20:23:15.253923] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.136 [2024-05-16 20:23:15.254307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.136 [2024-05-16 20:23:15.254338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.136 [2024-05-16 20:23:15.254355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.136 [2024-05-16 20:23:15.254595] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.136 [2024-05-16 20:23:15.254840] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.136 [2024-05-16 20:23:15.254874] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.136 [2024-05-16 20:23:15.254890] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.136 [2024-05-16 20:23:15.258503] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.136 [2024-05-16 20:23:15.267897] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.136 [2024-05-16 20:23:15.268354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.136 [2024-05-16 20:23:15.268396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.136 [2024-05-16 20:23:15.268412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.136 [2024-05-16 20:23:15.268674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.136 [2024-05-16 20:23:15.268942] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.136 [2024-05-16 20:23:15.268967] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.136 [2024-05-16 20:23:15.268982] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.136 [2024-05-16 20:23:15.272672] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.395 [2024-05-16 20:23:15.281961] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.395 [2024-05-16 20:23:15.282367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.395 [2024-05-16 20:23:15.282398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.395 [2024-05-16 20:23:15.282416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.395 [2024-05-16 20:23:15.282657] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.395 [2024-05-16 20:23:15.282925] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.395 [2024-05-16 20:23:15.282951] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.395 [2024-05-16 20:23:15.282966] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.395 [2024-05-16 20:23:15.286626] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.395 [2024-05-16 20:23:15.296025] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.395 [2024-05-16 20:23:15.296413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.395 [2024-05-16 20:23:15.296445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.395 [2024-05-16 20:23:15.296463] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.395 [2024-05-16 20:23:15.296703] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.395 [2024-05-16 20:23:15.296960] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.395 [2024-05-16 20:23:15.296984] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.395 [2024-05-16 20:23:15.296998] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.395 [2024-05-16 20:23:15.300613] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.395 [2024-05-16 20:23:15.310006] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.395 [2024-05-16 20:23:15.310408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.395 [2024-05-16 20:23:15.310439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.395 [2024-05-16 20:23:15.310456] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.396 [2024-05-16 20:23:15.310696] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.396 [2024-05-16 20:23:15.310953] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.396 [2024-05-16 20:23:15.310977] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.396 [2024-05-16 20:23:15.310992] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.396 [2024-05-16 20:23:15.314615] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.396 [2024-05-16 20:23:15.324054] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.396 [2024-05-16 20:23:15.324531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.396 [2024-05-16 20:23:15.324588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.396 [2024-05-16 20:23:15.324605] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.396 [2024-05-16 20:23:15.324846] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.396 [2024-05-16 20:23:15.325102] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.396 [2024-05-16 20:23:15.325125] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.396 [2024-05-16 20:23:15.325140] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.396 [2024-05-16 20:23:15.328754] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.396 [2024-05-16 20:23:15.338160] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.396 [2024-05-16 20:23:15.338557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.396 [2024-05-16 20:23:15.338589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.396 [2024-05-16 20:23:15.338606] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.396 [2024-05-16 20:23:15.338846] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.396 [2024-05-16 20:23:15.339101] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.396 [2024-05-16 20:23:15.339125] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.396 [2024-05-16 20:23:15.339140] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.396 [2024-05-16 20:23:15.342754] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.396 [2024-05-16 20:23:15.352157] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.396 [2024-05-16 20:23:15.352623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.396 [2024-05-16 20:23:15.352673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.396 [2024-05-16 20:23:15.352691] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.396 [2024-05-16 20:23:15.352942] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.396 [2024-05-16 20:23:15.353187] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.396 [2024-05-16 20:23:15.353211] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.396 [2024-05-16 20:23:15.353226] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.396 [2024-05-16 20:23:15.356842] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.396 [2024-05-16 20:23:15.366241] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.396 [2024-05-16 20:23:15.366701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.396 [2024-05-16 20:23:15.366759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.396 [2024-05-16 20:23:15.366777] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.396 [2024-05-16 20:23:15.367027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.396 [2024-05-16 20:23:15.367273] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.396 [2024-05-16 20:23:15.367296] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.396 [2024-05-16 20:23:15.367311] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.396 [2024-05-16 20:23:15.370936] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.396 [2024-05-16 20:23:15.380326] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.396 [2024-05-16 20:23:15.380688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.396 [2024-05-16 20:23:15.380719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.396 [2024-05-16 20:23:15.380736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.396 [2024-05-16 20:23:15.380987] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.396 [2024-05-16 20:23:15.381232] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.396 [2024-05-16 20:23:15.381255] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.396 [2024-05-16 20:23:15.381270] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.396 [2024-05-16 20:23:15.384892] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.396 [2024-05-16 20:23:15.394291] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.396 [2024-05-16 20:23:15.394679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.396 [2024-05-16 20:23:15.394710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.396 [2024-05-16 20:23:15.394727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.396 [2024-05-16 20:23:15.394981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.396 [2024-05-16 20:23:15.395227] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.396 [2024-05-16 20:23:15.395251] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.396 [2024-05-16 20:23:15.395265] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.396 [2024-05-16 20:23:15.398892] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.396 [2024-05-16 20:23:15.408285] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.396 [2024-05-16 20:23:15.408674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.396 [2024-05-16 20:23:15.408704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.396 [2024-05-16 20:23:15.408721] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.396 [2024-05-16 20:23:15.408972] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.396 [2024-05-16 20:23:15.409223] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.396 [2024-05-16 20:23:15.409247] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.396 [2024-05-16 20:23:15.409262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.397 [2024-05-16 20:23:15.412891] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.397 [2024-05-16 20:23:15.422281] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.397 [2024-05-16 20:23:15.422668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.397 [2024-05-16 20:23:15.422699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.397 [2024-05-16 20:23:15.422717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.397 [2024-05-16 20:23:15.422969] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.397 [2024-05-16 20:23:15.423215] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.397 [2024-05-16 20:23:15.423238] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.397 [2024-05-16 20:23:15.423253] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.397 [2024-05-16 20:23:15.426874] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.397 [2024-05-16 20:23:15.436261] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.397 [2024-05-16 20:23:15.436665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.397 [2024-05-16 20:23:15.436697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.397 [2024-05-16 20:23:15.436714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.397 [2024-05-16 20:23:15.436968] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.397 [2024-05-16 20:23:15.437213] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.397 [2024-05-16 20:23:15.437237] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.397 [2024-05-16 20:23:15.437252] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.397 [2024-05-16 20:23:15.440874] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.397 [2024-05-16 20:23:15.450257] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.397 [2024-05-16 20:23:15.450646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.397 [2024-05-16 20:23:15.450677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.397 [2024-05-16 20:23:15.450694] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.397 [2024-05-16 20:23:15.450947] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.397 [2024-05-16 20:23:15.451192] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.397 [2024-05-16 20:23:15.451216] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.397 [2024-05-16 20:23:15.451231] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.397 [2024-05-16 20:23:15.454850] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.397 [2024-05-16 20:23:15.464236] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.397 [2024-05-16 20:23:15.464620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.397 [2024-05-16 20:23:15.464650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.397 [2024-05-16 20:23:15.464667] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.397 [2024-05-16 20:23:15.464918] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.397 [2024-05-16 20:23:15.465164] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.397 [2024-05-16 20:23:15.465188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.397 [2024-05-16 20:23:15.465203] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.397 [2024-05-16 20:23:15.468815] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.397 [2024-05-16 20:23:15.478214] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.397 [2024-05-16 20:23:15.478601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.397 [2024-05-16 20:23:15.478631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.397 [2024-05-16 20:23:15.478649] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.397 [2024-05-16 20:23:15.478900] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.397 [2024-05-16 20:23:15.479146] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.397 [2024-05-16 20:23:15.479169] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.397 [2024-05-16 20:23:15.479184] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.397 [2024-05-16 20:23:15.482798] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.397 [2024-05-16 20:23:15.492193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.397 [2024-05-16 20:23:15.492581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.397 [2024-05-16 20:23:15.492612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.397 [2024-05-16 20:23:15.492629] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.397 [2024-05-16 20:23:15.492880] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.397 [2024-05-16 20:23:15.493125] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.397 [2024-05-16 20:23:15.493149] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.397 [2024-05-16 20:23:15.493164] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.397 [2024-05-16 20:23:15.496777] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.397 [2024-05-16 20:23:15.506186] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.397 [2024-05-16 20:23:15.506565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.397 [2024-05-16 20:23:15.506596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.397 [2024-05-16 20:23:15.506619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.397 [2024-05-16 20:23:15.506872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.397 [2024-05-16 20:23:15.507118] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.397 [2024-05-16 20:23:15.507142] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.397 [2024-05-16 20:23:15.507157] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.397 [2024-05-16 20:23:15.510794] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.397 [2024-05-16 20:23:15.520210] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.397 [2024-05-16 20:23:15.520607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.398 [2024-05-16 20:23:15.520638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.398 [2024-05-16 20:23:15.520655] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.398 [2024-05-16 20:23:15.520905] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.398 [2024-05-16 20:23:15.521168] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.398 [2024-05-16 20:23:15.521192] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.398 [2024-05-16 20:23:15.521207] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.398 [2024-05-16 20:23:15.524821] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.398 [2024-05-16 20:23:15.534227] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.398 [2024-05-16 20:23:15.534601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.398 [2024-05-16 20:23:15.534632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.398 [2024-05-16 20:23:15.534649] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.398 [2024-05-16 20:23:15.534899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.398 [2024-05-16 20:23:15.535152] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.398 [2024-05-16 20:23:15.535176] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.398 [2024-05-16 20:23:15.535191] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.398 [2024-05-16 20:23:15.538890] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.657 [2024-05-16 20:23:15.548182] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.657 [2024-05-16 20:23:15.548550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-05-16 20:23:15.548581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.657 [2024-05-16 20:23:15.548599] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.657 [2024-05-16 20:23:15.548840] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.657 [2024-05-16 20:23:15.549095] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.657 [2024-05-16 20:23:15.549125] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.657 [2024-05-16 20:23:15.549141] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.657 [2024-05-16 20:23:15.552762] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.657 [2024-05-16 20:23:15.562160] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.657 [2024-05-16 20:23:15.562561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-05-16 20:23:15.562591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.657 [2024-05-16 20:23:15.562608] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.657 [2024-05-16 20:23:15.562848] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.657 [2024-05-16 20:23:15.563105] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.657 [2024-05-16 20:23:15.563128] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.657 [2024-05-16 20:23:15.563143] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.657 [2024-05-16 20:23:15.566758] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.657 [2024-05-16 20:23:15.576163] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.657 [2024-05-16 20:23:15.576525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-05-16 20:23:15.576556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.657 [2024-05-16 20:23:15.576573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.657 [2024-05-16 20:23:15.576814] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.657 [2024-05-16 20:23:15.577069] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.657 [2024-05-16 20:23:15.577093] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.657 [2024-05-16 20:23:15.577108] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.657 [2024-05-16 20:23:15.580722] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.657 [2024-05-16 20:23:15.590128] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.657 [2024-05-16 20:23:15.590515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-05-16 20:23:15.590545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.657 [2024-05-16 20:23:15.590562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.657 [2024-05-16 20:23:15.590803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.657 [2024-05-16 20:23:15.591056] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.657 [2024-05-16 20:23:15.591081] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.657 [2024-05-16 20:23:15.591096] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.657 [2024-05-16 20:23:15.594713] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.657 [2024-05-16 20:23:15.604121] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.657 [2024-05-16 20:23:15.604489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-05-16 20:23:15.604519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.658 [2024-05-16 20:23:15.604536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.658 [2024-05-16 20:23:15.604777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.658 [2024-05-16 20:23:15.605031] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.658 [2024-05-16 20:23:15.605055] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.658 [2024-05-16 20:23:15.605070] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.658 [2024-05-16 20:23:15.608690] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.658 [2024-05-16 20:23:15.618096] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.658 [2024-05-16 20:23:15.618464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-05-16 20:23:15.618495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.658 [2024-05-16 20:23:15.618512] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.658 [2024-05-16 20:23:15.618752] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.658 [2024-05-16 20:23:15.619007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.658 [2024-05-16 20:23:15.619031] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.658 [2024-05-16 20:23:15.619046] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.658 [2024-05-16 20:23:15.622661] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.658 [2024-05-16 20:23:15.632061] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.658 [2024-05-16 20:23:15.632446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-05-16 20:23:15.632476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.658 [2024-05-16 20:23:15.632493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.658 [2024-05-16 20:23:15.632733] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.658 [2024-05-16 20:23:15.632989] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.658 [2024-05-16 20:23:15.633013] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.658 [2024-05-16 20:23:15.633028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.658 [2024-05-16 20:23:15.636644] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.658 [2024-05-16 20:23:15.646052] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.658 [2024-05-16 20:23:15.646417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-05-16 20:23:15.646448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.658 [2024-05-16 20:23:15.646465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.658 [2024-05-16 20:23:15.646709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.658 [2024-05-16 20:23:15.646966] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.658 [2024-05-16 20:23:15.646990] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.658 [2024-05-16 20:23:15.647005] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.658 [2024-05-16 20:23:15.650619] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.658 [2024-05-16 20:23:15.660016] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.658 [2024-05-16 20:23:15.660380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-05-16 20:23:15.660410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.658 [2024-05-16 20:23:15.660428] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.658 [2024-05-16 20:23:15.660668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.658 [2024-05-16 20:23:15.660923] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.658 [2024-05-16 20:23:15.660947] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.658 [2024-05-16 20:23:15.660962] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.658 [2024-05-16 20:23:15.664575] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.658 [2024-05-16 20:23:15.673968] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.658 [2024-05-16 20:23:15.674341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-05-16 20:23:15.674372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.658 [2024-05-16 20:23:15.674389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.658 [2024-05-16 20:23:15.674629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.658 [2024-05-16 20:23:15.674885] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.658 [2024-05-16 20:23:15.674909] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.658 [2024-05-16 20:23:15.674924] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.658 [2024-05-16 20:23:15.678536] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.658 [2024-05-16 20:23:15.687924] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.658 [2024-05-16 20:23:15.688309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-05-16 20:23:15.688339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.658 [2024-05-16 20:23:15.688356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.658 [2024-05-16 20:23:15.688596] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.658 [2024-05-16 20:23:15.688841] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.658 [2024-05-16 20:23:15.688875] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.658 [2024-05-16 20:23:15.688896] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.658 [2024-05-16 20:23:15.692511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.658 [2024-05-16 20:23:15.701952] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.658 [2024-05-16 20:23:15.702349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-05-16 20:23:15.702380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.658 [2024-05-16 20:23:15.702397] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.658 [2024-05-16 20:23:15.702637] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.658 [2024-05-16 20:23:15.702893] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.658 [2024-05-16 20:23:15.702918] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.658 [2024-05-16 20:23:15.702932] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.658 [2024-05-16 20:23:15.706548] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.658 [2024-05-16 20:23:15.715945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.658 [2024-05-16 20:23:15.716310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-05-16 20:23:15.716341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.658 [2024-05-16 20:23:15.716358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.658 [2024-05-16 20:23:15.716598] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.658 [2024-05-16 20:23:15.716843] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.658 [2024-05-16 20:23:15.716877] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.658 [2024-05-16 20:23:15.716893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.658 [2024-05-16 20:23:15.720507] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.658 [2024-05-16 20:23:15.729905] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.658 [2024-05-16 20:23:15.730270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-05-16 20:23:15.730301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.658 [2024-05-16 20:23:15.730318] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.658 [2024-05-16 20:23:15.730558] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.658 [2024-05-16 20:23:15.730803] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.658 [2024-05-16 20:23:15.730826] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.658 [2024-05-16 20:23:15.730841] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.658 [2024-05-16 20:23:15.734466] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.658 [2024-05-16 20:23:15.743849] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.658 [2024-05-16 20:23:15.744267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-05-16 20:23:15.744297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.658 [2024-05-16 20:23:15.744314] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.658 [2024-05-16 20:23:15.744555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.658 [2024-05-16 20:23:15.744800] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.659 [2024-05-16 20:23:15.744823] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.659 [2024-05-16 20:23:15.744838] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.659 [2024-05-16 20:23:15.748460] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.659 [2024-05-16 20:23:15.757861] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.659 [2024-05-16 20:23:15.758235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-05-16 20:23:15.758266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.659 [2024-05-16 20:23:15.758283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.659 [2024-05-16 20:23:15.758523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.659 [2024-05-16 20:23:15.758768] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.659 [2024-05-16 20:23:15.758792] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.659 [2024-05-16 20:23:15.758806] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.659 [2024-05-16 20:23:15.762429] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.659 [2024-05-16 20:23:15.771816] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.659 [2024-05-16 20:23:15.772226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-05-16 20:23:15.772256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.659 [2024-05-16 20:23:15.772273] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.659 [2024-05-16 20:23:15.772514] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.659 [2024-05-16 20:23:15.772758] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.659 [2024-05-16 20:23:15.772782] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.659 [2024-05-16 20:23:15.772797] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.659 [2024-05-16 20:23:15.776421] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.659 [2024-05-16 20:23:15.785810] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.659 [2024-05-16 20:23:15.786200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-05-16 20:23:15.786231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.659 [2024-05-16 20:23:15.786248] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.659 [2024-05-16 20:23:15.786488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.659 [2024-05-16 20:23:15.786739] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.659 [2024-05-16 20:23:15.786762] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.659 [2024-05-16 20:23:15.786777] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.659 [2024-05-16 20:23:15.790401] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.659 [2024-05-16 20:23:15.799871] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.659 [2024-05-16 20:23:15.800234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-05-16 20:23:15.800266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.659 [2024-05-16 20:23:15.800283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.659 [2024-05-16 20:23:15.800525] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.659 [2024-05-16 20:23:15.800781] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.659 [2024-05-16 20:23:15.800810] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.659 [2024-05-16 20:23:15.800827] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.918 [2024-05-16 20:23:15.804493] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.918 [2024-05-16 20:23:15.813981] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.918 [2024-05-16 20:23:15.814378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.918 [2024-05-16 20:23:15.814409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.919 [2024-05-16 20:23:15.814426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.919 [2024-05-16 20:23:15.814667] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.919 [2024-05-16 20:23:15.814925] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.919 [2024-05-16 20:23:15.814949] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.919 [2024-05-16 20:23:15.814964] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.919 [2024-05-16 20:23:15.818577] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.919 [2024-05-16 20:23:15.827991] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.919 [2024-05-16 20:23:15.828385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.919 [2024-05-16 20:23:15.828415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.919 [2024-05-16 20:23:15.828432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.919 [2024-05-16 20:23:15.828673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.919 [2024-05-16 20:23:15.828931] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.919 [2024-05-16 20:23:15.828955] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.919 [2024-05-16 20:23:15.828970] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.919 [2024-05-16 20:23:15.832594] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.919 [2024-05-16 20:23:15.842005] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.919 [2024-05-16 20:23:15.842390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.919 [2024-05-16 20:23:15.842421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.919 [2024-05-16 20:23:15.842438] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.919 [2024-05-16 20:23:15.842678] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.919 [2024-05-16 20:23:15.842934] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.919 [2024-05-16 20:23:15.842958] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.919 [2024-05-16 20:23:15.842973] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.919 [2024-05-16 20:23:15.846594] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.919 [2024-05-16 20:23:15.856016] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.919 [2024-05-16 20:23:15.856380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.919 [2024-05-16 20:23:15.856410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.919 [2024-05-16 20:23:15.856427] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.919 [2024-05-16 20:23:15.856668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.919 [2024-05-16 20:23:15.856923] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.919 [2024-05-16 20:23:15.856947] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.919 [2024-05-16 20:23:15.856962] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.919 [2024-05-16 20:23:15.860576] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.919 [2024-05-16 20:23:15.869986] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.919 [2024-05-16 20:23:15.870384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.919 [2024-05-16 20:23:15.870414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.919 [2024-05-16 20:23:15.870432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.919 [2024-05-16 20:23:15.870672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.919 [2024-05-16 20:23:15.870928] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.919 [2024-05-16 20:23:15.870952] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.919 [2024-05-16 20:23:15.870967] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.919 [2024-05-16 20:23:15.874580] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.919 [2024-05-16 20:23:15.883984] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.919 [2024-05-16 20:23:15.884359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.919 [2024-05-16 20:23:15.884390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.919 [2024-05-16 20:23:15.884412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.919 [2024-05-16 20:23:15.884653] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.919 [2024-05-16 20:23:15.884911] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.919 [2024-05-16 20:23:15.884936] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.919 [2024-05-16 20:23:15.884951] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.919 [2024-05-16 20:23:15.888571] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.919 [2024-05-16 20:23:15.897986] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.919 [2024-05-16 20:23:15.898387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.919 [2024-05-16 20:23:15.898417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.919 [2024-05-16 20:23:15.898434] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.919 [2024-05-16 20:23:15.898675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.919 [2024-05-16 20:23:15.898930] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.919 [2024-05-16 20:23:15.898955] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.919 [2024-05-16 20:23:15.898970] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.919 [2024-05-16 20:23:15.902590] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.919 [2024-05-16 20:23:15.912001] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.919 [2024-05-16 20:23:15.912383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.919 [2024-05-16 20:23:15.912414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.919 [2024-05-16 20:23:15.912431] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.919 [2024-05-16 20:23:15.912671] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.919 [2024-05-16 20:23:15.912932] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.919 [2024-05-16 20:23:15.912956] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.919 [2024-05-16 20:23:15.912971] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.919 [2024-05-16 20:23:15.916590] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.919 [2024-05-16 20:23:15.925997] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.919 [2024-05-16 20:23:15.926385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.919 [2024-05-16 20:23:15.926415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.919 [2024-05-16 20:23:15.926432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.919 [2024-05-16 20:23:15.926673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.919 [2024-05-16 20:23:15.926935] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.919 [2024-05-16 20:23:15.926959] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.919 [2024-05-16 20:23:15.926975] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.919 [2024-05-16 20:23:15.930596] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.919 [2024-05-16 20:23:15.940013] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.919 [2024-05-16 20:23:15.940398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.919 [2024-05-16 20:23:15.940429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.919 [2024-05-16 20:23:15.940446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.919 [2024-05-16 20:23:15.940686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.919 [2024-05-16 20:23:15.940943] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.919 [2024-05-16 20:23:15.940967] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.919 [2024-05-16 20:23:15.940982] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.920 [2024-05-16 20:23:15.944605] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.920 [2024-05-16 20:23:15.954017] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.920 [2024-05-16 20:23:15.954384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.920 [2024-05-16 20:23:15.954414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.920 [2024-05-16 20:23:15.954431] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.920 [2024-05-16 20:23:15.954672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.920 [2024-05-16 20:23:15.954929] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.920 [2024-05-16 20:23:15.954954] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.920 [2024-05-16 20:23:15.954969] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.920 [2024-05-16 20:23:15.958596] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.920 [2024-05-16 20:23:15.968030] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.920 [2024-05-16 20:23:15.968427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.920 [2024-05-16 20:23:15.968458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.920 [2024-05-16 20:23:15.968475] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.920 [2024-05-16 20:23:15.968716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.920 [2024-05-16 20:23:15.968972] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.920 [2024-05-16 20:23:15.968996] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.920 [2024-05-16 20:23:15.969011] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.920 [2024-05-16 20:23:15.972632] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.920 [2024-05-16 20:23:15.982068] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.920 [2024-05-16 20:23:15.982454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.920 [2024-05-16 20:23:15.982484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.920 [2024-05-16 20:23:15.982502] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.920 [2024-05-16 20:23:15.982743] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.920 [2024-05-16 20:23:15.982999] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.920 [2024-05-16 20:23:15.983024] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.920 [2024-05-16 20:23:15.983039] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.920 [2024-05-16 20:23:15.986663] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.920 [2024-05-16 20:23:15.996099] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.920 [2024-05-16 20:23:15.996463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.920 [2024-05-16 20:23:15.996494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.920 [2024-05-16 20:23:15.996511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.920 [2024-05-16 20:23:15.996752] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.920 [2024-05-16 20:23:15.997009] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.920 [2024-05-16 20:23:15.997033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.920 [2024-05-16 20:23:15.997049] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.920 [2024-05-16 20:23:16.000666] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.920 [2024-05-16 20:23:16.010086] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.920 [2024-05-16 20:23:16.010451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.920 [2024-05-16 20:23:16.010482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.920 [2024-05-16 20:23:16.010500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.920 [2024-05-16 20:23:16.010740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.920 [2024-05-16 20:23:16.010996] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.920 [2024-05-16 20:23:16.011021] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.920 [2024-05-16 20:23:16.011037] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.920 [2024-05-16 20:23:16.014664] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.920 [2024-05-16 20:23:16.024157] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.920 [2024-05-16 20:23:16.024520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.920 [2024-05-16 20:23:16.024552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.920 [2024-05-16 20:23:16.024574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.920 [2024-05-16 20:23:16.024815] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.920 [2024-05-16 20:23:16.025070] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.920 [2024-05-16 20:23:16.025096] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.920 [2024-05-16 20:23:16.025111] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.920 [2024-05-16 20:23:16.028729] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.920 [2024-05-16 20:23:16.038159] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.920 [2024-05-16 20:23:16.038559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.920 [2024-05-16 20:23:16.038589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.920 [2024-05-16 20:23:16.038606] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.920 [2024-05-16 20:23:16.038847] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.920 [2024-05-16 20:23:16.039101] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.920 [2024-05-16 20:23:16.039125] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.920 [2024-05-16 20:23:16.039140] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.920 [2024-05-16 20:23:16.042765] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.920 [2024-05-16 20:23:16.052200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.920 [2024-05-16 20:23:16.052595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.920 [2024-05-16 20:23:16.052626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:28.920 [2024-05-16 20:23:16.052643] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:28.920 [2024-05-16 20:23:16.052895] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:28.920 [2024-05-16 20:23:16.053148] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.920 [2024-05-16 20:23:16.053184] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.920 [2024-05-16 20:23:16.053198] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.920 [2024-05-16 20:23:16.056821] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.180 [2024-05-16 20:23:16.066189] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.180 [2024-05-16 20:23:16.066580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.180 [2024-05-16 20:23:16.066612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.180 [2024-05-16 20:23:16.066629] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.180 [2024-05-16 20:23:16.066898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.180 [2024-05-16 20:23:16.067144] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.180 [2024-05-16 20:23:16.067174] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.180 [2024-05-16 20:23:16.067190] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.180 [2024-05-16 20:23:16.070874] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.180 [2024-05-16 20:23:16.080090] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.180 [2024-05-16 20:23:16.080503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.180 [2024-05-16 20:23:16.080534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.180 [2024-05-16 20:23:16.080551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.180 [2024-05-16 20:23:16.080792] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.180 [2024-05-16 20:23:16.081048] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.180 [2024-05-16 20:23:16.081073] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.180 [2024-05-16 20:23:16.081088] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.180 [2024-05-16 20:23:16.084734] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.180 [2024-05-16 20:23:16.094165] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.180 [2024-05-16 20:23:16.094545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.180 [2024-05-16 20:23:16.094576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.180 [2024-05-16 20:23:16.094593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.180 [2024-05-16 20:23:16.094833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.180 [2024-05-16 20:23:16.095089] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.180 [2024-05-16 20:23:16.095124] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.180 [2024-05-16 20:23:16.095140] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.180 [2024-05-16 20:23:16.098987] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.180 [2024-05-16 20:23:16.108183] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.180 [2024-05-16 20:23:16.108569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.180 [2024-05-16 20:23:16.108600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.180 [2024-05-16 20:23:16.108617] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.180 [2024-05-16 20:23:16.108866] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.180 [2024-05-16 20:23:16.109112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.180 [2024-05-16 20:23:16.109136] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.180 [2024-05-16 20:23:16.109150] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.180 [2024-05-16 20:23:16.112765] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.180 [2024-05-16 20:23:16.122184] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.180 [2024-05-16 20:23:16.122579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.180 [2024-05-16 20:23:16.122610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.180 [2024-05-16 20:23:16.122627] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.180 [2024-05-16 20:23:16.122878] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.180 [2024-05-16 20:23:16.123124] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.180 [2024-05-16 20:23:16.123147] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.180 [2024-05-16 20:23:16.123162] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.180 [2024-05-16 20:23:16.126776] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.180 [2024-05-16 20:23:16.136179] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.180 [2024-05-16 20:23:16.136548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.180 [2024-05-16 20:23:16.136578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.180 [2024-05-16 20:23:16.136595] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.181 [2024-05-16 20:23:16.136835] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.181 [2024-05-16 20:23:16.137090] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.181 [2024-05-16 20:23:16.137114] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.181 [2024-05-16 20:23:16.137129] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.181 [2024-05-16 20:23:16.140747] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.181 [2024-05-16 20:23:16.150149] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.181 [2024-05-16 20:23:16.150519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.181 [2024-05-16 20:23:16.150549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.181 [2024-05-16 20:23:16.150567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.181 [2024-05-16 20:23:16.150807] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.181 [2024-05-16 20:23:16.151062] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.181 [2024-05-16 20:23:16.151086] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.181 [2024-05-16 20:23:16.151102] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.181 [2024-05-16 20:23:16.154716] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.181 [2024-05-16 20:23:16.164119] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.181 [2024-05-16 20:23:16.164526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.181 [2024-05-16 20:23:16.164557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.181 [2024-05-16 20:23:16.164575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.181 [2024-05-16 20:23:16.164820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.181 [2024-05-16 20:23:16.165075] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.181 [2024-05-16 20:23:16.165099] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.181 [2024-05-16 20:23:16.165114] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.181 [2024-05-16 20:23:16.168733] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.181 [2024-05-16 20:23:16.178133] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.181 [2024-05-16 20:23:16.178508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.181 [2024-05-16 20:23:16.178539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.181 [2024-05-16 20:23:16.178556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.181 [2024-05-16 20:23:16.178797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.181 [2024-05-16 20:23:16.179052] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.181 [2024-05-16 20:23:16.179076] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.181 [2024-05-16 20:23:16.179091] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.181 [2024-05-16 20:23:16.182704] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.181 [2024-05-16 20:23:16.192106] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.181 [2024-05-16 20:23:16.192473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.181 [2024-05-16 20:23:16.192504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.181 [2024-05-16 20:23:16.192521] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.181 [2024-05-16 20:23:16.192761] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.181 [2024-05-16 20:23:16.193019] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.181 [2024-05-16 20:23:16.193043] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.181 [2024-05-16 20:23:16.193058] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.181 [2024-05-16 20:23:16.196674] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.181 [2024-05-16 20:23:16.206076] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.181 [2024-05-16 20:23:16.206439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.181 [2024-05-16 20:23:16.206469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.181 [2024-05-16 20:23:16.206486] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.181 [2024-05-16 20:23:16.206726] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.181 [2024-05-16 20:23:16.206982] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.181 [2024-05-16 20:23:16.207006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.181 [2024-05-16 20:23:16.207027] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.181 [2024-05-16 20:23:16.210643] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.181 [2024-05-16 20:23:16.220045] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.181 [2024-05-16 20:23:16.220433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.181 [2024-05-16 20:23:16.220464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.181 [2024-05-16 20:23:16.220481] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.181 [2024-05-16 20:23:16.220721] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.181 [2024-05-16 20:23:16.220978] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.181 [2024-05-16 20:23:16.221002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.181 [2024-05-16 20:23:16.221017] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.181 [2024-05-16 20:23:16.224633] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.181 [2024-05-16 20:23:16.234034] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.181 [2024-05-16 20:23:16.234432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.181 [2024-05-16 20:23:16.234462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.181 [2024-05-16 20:23:16.234480] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.181 [2024-05-16 20:23:16.234720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.181 [2024-05-16 20:23:16.234976] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.181 [2024-05-16 20:23:16.235000] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.181 [2024-05-16 20:23:16.235015] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.181 [2024-05-16 20:23:16.238630] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.181 [2024-05-16 20:23:16.248026] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.181 [2024-05-16 20:23:16.248411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.181 [2024-05-16 20:23:16.248441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.181 [2024-05-16 20:23:16.248458] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.181 [2024-05-16 20:23:16.248698] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.181 [2024-05-16 20:23:16.248954] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.181 [2024-05-16 20:23:16.248978] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.181 [2024-05-16 20:23:16.248993] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.181 [2024-05-16 20:23:16.252604] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.181 [2024-05-16 20:23:16.262000] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.181 [2024-05-16 20:23:16.262397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.181 [2024-05-16 20:23:16.262431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.181 [2024-05-16 20:23:16.262449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.181 [2024-05-16 20:23:16.262690] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.181 [2024-05-16 20:23:16.262947] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.181 [2024-05-16 20:23:16.262971] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.181 [2024-05-16 20:23:16.262986] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.181 [2024-05-16 20:23:16.266601] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.181 [2024-05-16 20:23:16.276004] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.181 [2024-05-16 20:23:16.276377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.181 [2024-05-16 20:23:16.276407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.181 [2024-05-16 20:23:16.276424] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.181 [2024-05-16 20:23:16.276664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.181 [2024-05-16 20:23:16.276921] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.181 [2024-05-16 20:23:16.276945] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.182 [2024-05-16 20:23:16.276961] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.182 [2024-05-16 20:23:16.280575] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.182 [2024-05-16 20:23:16.289976] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.182 [2024-05-16 20:23:16.290374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.182 [2024-05-16 20:23:16.290404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.182 [2024-05-16 20:23:16.290421] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.182 [2024-05-16 20:23:16.290661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.182 [2024-05-16 20:23:16.290916] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.182 [2024-05-16 20:23:16.290940] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.182 [2024-05-16 20:23:16.290956] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.182 [2024-05-16 20:23:16.294572] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.182 [2024-05-16 20:23:16.303970] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.182 [2024-05-16 20:23:16.304355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.182 [2024-05-16 20:23:16.304385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.182 [2024-05-16 20:23:16.304402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.182 [2024-05-16 20:23:16.304642] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.182 [2024-05-16 20:23:16.304904] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.182 [2024-05-16 20:23:16.304928] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.182 [2024-05-16 20:23:16.304943] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.182 [2024-05-16 20:23:16.308557] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.182 [2024-05-16 20:23:16.317960] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.182 [2024-05-16 20:23:16.318304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.182 [2024-05-16 20:23:16.318334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.182 [2024-05-16 20:23:16.318352] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.182 [2024-05-16 20:23:16.318591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.182 [2024-05-16 20:23:16.318837] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.182 [2024-05-16 20:23:16.318871] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.182 [2024-05-16 20:23:16.318887] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.182 [2024-05-16 20:23:16.322573] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.441 [2024-05-16 20:23:16.332089] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.442 [2024-05-16 20:23:16.332481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.442 [2024-05-16 20:23:16.332512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.442 [2024-05-16 20:23:16.332530] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.442 [2024-05-16 20:23:16.332771] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.442 [2024-05-16 20:23:16.333028] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.442 [2024-05-16 20:23:16.333052] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.442 [2024-05-16 20:23:16.333067] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.442 [2024-05-16 20:23:16.336683] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.442 [2024-05-16 20:23:16.346086] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.442 [2024-05-16 20:23:16.346473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.442 [2024-05-16 20:23:16.346504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.442 [2024-05-16 20:23:16.346521] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.442 [2024-05-16 20:23:16.346761] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.442 [2024-05-16 20:23:16.347018] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.442 [2024-05-16 20:23:16.347042] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.442 [2024-05-16 20:23:16.347057] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.442 [2024-05-16 20:23:16.350677] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.442 [2024-05-16 20:23:16.360084] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.442 [2024-05-16 20:23:16.360448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.442 [2024-05-16 20:23:16.360479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.442 [2024-05-16 20:23:16.360496] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.442 [2024-05-16 20:23:16.360735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.442 [2024-05-16 20:23:16.360992] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.442 [2024-05-16 20:23:16.361016] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.442 [2024-05-16 20:23:16.361031] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.442 [2024-05-16 20:23:16.364645] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.442 [2024-05-16 20:23:16.374062] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.442 [2024-05-16 20:23:16.374438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.442 [2024-05-16 20:23:16.374469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.442 [2024-05-16 20:23:16.374486] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.442 [2024-05-16 20:23:16.374726] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.442 [2024-05-16 20:23:16.374982] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.442 [2024-05-16 20:23:16.375006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.442 [2024-05-16 20:23:16.375021] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.442 [2024-05-16 20:23:16.378637] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.442 [2024-05-16 20:23:16.388034] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.442 [2024-05-16 20:23:16.388425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.442 [2024-05-16 20:23:16.388455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.442 [2024-05-16 20:23:16.388472] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.442 [2024-05-16 20:23:16.388712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.442 [2024-05-16 20:23:16.388969] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.442 [2024-05-16 20:23:16.388994] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.442 [2024-05-16 20:23:16.389009] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.442 [2024-05-16 20:23:16.392623] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.442 [2024-05-16 20:23:16.401314] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.442 [2024-05-16 20:23:16.401703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.442 [2024-05-16 20:23:16.401729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.442 [2024-05-16 20:23:16.401748] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.442 [2024-05-16 20:23:16.402015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.442 [2024-05-16 20:23:16.402236] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.442 [2024-05-16 20:23:16.402256] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.442 [2024-05-16 20:23:16.402268] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.442 [2024-05-16 20:23:16.405305] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.442 [2024-05-16 20:23:16.414622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.442 [2024-05-16 20:23:16.414959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.442 [2024-05-16 20:23:16.414987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.442 [2024-05-16 20:23:16.415002] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.442 [2024-05-16 20:23:16.415225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.442 [2024-05-16 20:23:16.415427] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.442 [2024-05-16 20:23:16.415446] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.442 [2024-05-16 20:23:16.415458] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.442 [2024-05-16 20:23:16.418437] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.442 [2024-05-16 20:23:16.427937] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.442 [2024-05-16 20:23:16.428269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.442 [2024-05-16 20:23:16.428310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.442 [2024-05-16 20:23:16.428325] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.442 [2024-05-16 20:23:16.428549] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.442 [2024-05-16 20:23:16.428766] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.442 [2024-05-16 20:23:16.428786] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.442 [2024-05-16 20:23:16.428798] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.442 [2024-05-16 20:23:16.431774] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.442 [2024-05-16 20:23:16.441230] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.442 [2024-05-16 20:23:16.441604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.442 [2024-05-16 20:23:16.441632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.442 [2024-05-16 20:23:16.441647] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.442 [2024-05-16 20:23:16.441902] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.442 [2024-05-16 20:23:16.442110] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.442 [2024-05-16 20:23:16.442136] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.442 [2024-05-16 20:23:16.442149] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.442 [2024-05-16 20:23:16.445181] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.442 [2024-05-16 20:23:16.454590] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.442 [2024-05-16 20:23:16.454965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.442 [2024-05-16 20:23:16.454993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.442 [2024-05-16 20:23:16.455008] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.442 [2024-05-16 20:23:16.455239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.443 [2024-05-16 20:23:16.455456] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.443 [2024-05-16 20:23:16.455476] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.443 [2024-05-16 20:23:16.455488] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.443 [2024-05-16 20:23:16.458500] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.443 [2024-05-16 20:23:16.467999] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.443 [2024-05-16 20:23:16.468337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.443 [2024-05-16 20:23:16.468379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.443 [2024-05-16 20:23:16.468394] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.443 [2024-05-16 20:23:16.468619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.443 [2024-05-16 20:23:16.468837] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.443 [2024-05-16 20:23:16.468879] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.443 [2024-05-16 20:23:16.468895] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.443 [2024-05-16 20:23:16.471916] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.443 [2024-05-16 20:23:16.481308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.443 [2024-05-16 20:23:16.481683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.443 [2024-05-16 20:23:16.481725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.443 [2024-05-16 20:23:16.481741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.443 [2024-05-16 20:23:16.482012] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.443 [2024-05-16 20:23:16.482234] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.443 [2024-05-16 20:23:16.482254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.443 [2024-05-16 20:23:16.482266] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.443 [2024-05-16 20:23:16.485280] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.443 [2024-05-16 20:23:16.494870] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.443 [2024-05-16 20:23:16.495241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.443 [2024-05-16 20:23:16.495268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.443 [2024-05-16 20:23:16.495284] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.443 [2024-05-16 20:23:16.495515] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.443 [2024-05-16 20:23:16.495733] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.443 [2024-05-16 20:23:16.495753] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.443 [2024-05-16 20:23:16.495765] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.443 [2024-05-16 20:23:16.498915] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.443 [2024-05-16 20:23:16.508260] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.443 [2024-05-16 20:23:16.508637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.443 [2024-05-16 20:23:16.508678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.443 [2024-05-16 20:23:16.508693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.443 [2024-05-16 20:23:16.508959] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.443 [2024-05-16 20:23:16.509182] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.443 [2024-05-16 20:23:16.509201] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.443 [2024-05-16 20:23:16.509214] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.443 [2024-05-16 20:23:16.512268] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.443 [2024-05-16 20:23:16.521614] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.443 [2024-05-16 20:23:16.521982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.443 [2024-05-16 20:23:16.522010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.443 [2024-05-16 20:23:16.522025] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.443 [2024-05-16 20:23:16.522269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.443 [2024-05-16 20:23:16.522486] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.443 [2024-05-16 20:23:16.522505] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.443 [2024-05-16 20:23:16.522517] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.443 [2024-05-16 20:23:16.525542] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.443 [2024-05-16 20:23:16.535019] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.443 [2024-05-16 20:23:16.535370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.443 [2024-05-16 20:23:16.535398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.443 [2024-05-16 20:23:16.535422] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.443 [2024-05-16 20:23:16.535673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.443 [2024-05-16 20:23:16.535899] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.443 [2024-05-16 20:23:16.535930] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.443 [2024-05-16 20:23:16.535943] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.443 [2024-05-16 20:23:16.538975] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.443 [2024-05-16 20:23:16.548411] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.443 [2024-05-16 20:23:16.548824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.443 [2024-05-16 20:23:16.548860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.443 [2024-05-16 20:23:16.548879] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.443 [2024-05-16 20:23:16.549110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.443 [2024-05-16 20:23:16.549343] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.443 [2024-05-16 20:23:16.549363] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.443 [2024-05-16 20:23:16.549375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.443 [2024-05-16 20:23:16.552397] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.443 [2024-05-16 20:23:16.561790] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.443 [2024-05-16 20:23:16.562223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.443 [2024-05-16 20:23:16.562250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.443 [2024-05-16 20:23:16.562281] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.443 [2024-05-16 20:23:16.562524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.443 [2024-05-16 20:23:16.562725] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.443 [2024-05-16 20:23:16.562744] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.443 [2024-05-16 20:23:16.562757] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.443 [2024-05-16 20:23:16.565818] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.443 [2024-05-16 20:23:16.575053] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.443 [2024-05-16 20:23:16.575518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.443 [2024-05-16 20:23:16.575545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.443 [2024-05-16 20:23:16.575561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.443 [2024-05-16 20:23:16.575809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.443 [2024-05-16 20:23:16.576054] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.443 [2024-05-16 20:23:16.576075] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.443 [2024-05-16 20:23:16.576092] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.443 [2024-05-16 20:23:16.579169] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.703 [2024-05-16 20:23:16.588607] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.703 [2024-05-16 20:23:16.589000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.703 [2024-05-16 20:23:16.589028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.703 [2024-05-16 20:23:16.589044] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.703 [2024-05-16 20:23:16.589276] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.703 [2024-05-16 20:23:16.589510] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.703 [2024-05-16 20:23:16.589532] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.703 [2024-05-16 20:23:16.589545] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.703 [2024-05-16 20:23:16.593028] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.703 [2024-05-16 20:23:16.601931] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.703 [2024-05-16 20:23:16.602371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.703 [2024-05-16 20:23:16.602399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.703 [2024-05-16 20:23:16.602415] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.703 [2024-05-16 20:23:16.602659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.703 [2024-05-16 20:23:16.602888] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.703 [2024-05-16 20:23:16.602910] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.703 [2024-05-16 20:23:16.602924] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.703 [2024-05-16 20:23:16.606005] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.703 [2024-05-16 20:23:16.615248] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.703 [2024-05-16 20:23:16.615594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.703 [2024-05-16 20:23:16.615623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.703 [2024-05-16 20:23:16.615638] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.703 [2024-05-16 20:23:16.615895] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.703 [2024-05-16 20:23:16.616104] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.703 [2024-05-16 20:23:16.616124] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.703 [2024-05-16 20:23:16.616136] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.703 [2024-05-16 20:23:16.619169] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.703 [2024-05-16 20:23:16.628651] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.703 [2024-05-16 20:23:16.629086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.703 [2024-05-16 20:23:16.629115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.703 [2024-05-16 20:23:16.629131] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.703 [2024-05-16 20:23:16.629360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.703 [2024-05-16 20:23:16.629577] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.703 [2024-05-16 20:23:16.629597] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.703 [2024-05-16 20:23:16.629609] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.703 [2024-05-16 20:23:16.632632] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.703 [2024-05-16 20:23:16.642063] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.703 [2024-05-16 20:23:16.642496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.703 [2024-05-16 20:23:16.642522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.703 [2024-05-16 20:23:16.642553] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.703 [2024-05-16 20:23:16.642783] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.703 [2024-05-16 20:23:16.643029] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.703 [2024-05-16 20:23:16.643049] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.703 [2024-05-16 20:23:16.643062] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.703 [2024-05-16 20:23:16.646078] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.703 [2024-05-16 20:23:16.655342] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.703 [2024-05-16 20:23:16.655717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.703 [2024-05-16 20:23:16.655758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.703 [2024-05-16 20:23:16.655773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.703 [2024-05-16 20:23:16.656017] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.703 [2024-05-16 20:23:16.656238] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.703 [2024-05-16 20:23:16.656257] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.703 [2024-05-16 20:23:16.656269] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.703 [2024-05-16 20:23:16.659285] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.703 [2024-05-16 20:23:16.668518] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.703 [2024-05-16 20:23:16.668906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.703 [2024-05-16 20:23:16.668946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.703 [2024-05-16 20:23:16.668962] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.704 [2024-05-16 20:23:16.669202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.704 [2024-05-16 20:23:16.669420] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.704 [2024-05-16 20:23:16.669439] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.704 [2024-05-16 20:23:16.669451] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.704 [2024-05-16 20:23:16.672480] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.704 [2024-05-16 20:23:16.681913] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.704 [2024-05-16 20:23:16.682302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.704 [2024-05-16 20:23:16.682330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.704 [2024-05-16 20:23:16.682345] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.704 [2024-05-16 20:23:16.682577] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.704 [2024-05-16 20:23:16.682794] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.704 [2024-05-16 20:23:16.682814] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.704 [2024-05-16 20:23:16.682826] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.704 [2024-05-16 20:23:16.685864] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.704 [2024-05-16 20:23:16.695281] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.704 [2024-05-16 20:23:16.695652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.704 [2024-05-16 20:23:16.695694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.704 [2024-05-16 20:23:16.695710] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.704 [2024-05-16 20:23:16.695976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.704 [2024-05-16 20:23:16.696198] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.704 [2024-05-16 20:23:16.696217] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.704 [2024-05-16 20:23:16.696230] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.704 [2024-05-16 20:23:16.699246] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.704 [2024-05-16 20:23:16.708609] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.704 [2024-05-16 20:23:16.708980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.704 [2024-05-16 20:23:16.709007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.704 [2024-05-16 20:23:16.709022] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.704 [2024-05-16 20:23:16.709258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.704 [2024-05-16 20:23:16.709460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.704 [2024-05-16 20:23:16.709479] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.704 [2024-05-16 20:23:16.709496] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.704 [2024-05-16 20:23:16.712546] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.704 [2024-05-16 20:23:16.721837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.704 [2024-05-16 20:23:16.722208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.704 [2024-05-16 20:23:16.722235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.704 [2024-05-16 20:23:16.722250] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.704 [2024-05-16 20:23:16.722475] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.704 [2024-05-16 20:23:16.722690] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.704 [2024-05-16 20:23:16.722709] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.704 [2024-05-16 20:23:16.722722] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.704 [2024-05-16 20:23:16.725726] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.704 [2024-05-16 20:23:16.735185] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.704 [2024-05-16 20:23:16.735558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.704 [2024-05-16 20:23:16.735585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.704 [2024-05-16 20:23:16.735600] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.704 [2024-05-16 20:23:16.735845] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.704 [2024-05-16 20:23:16.736074] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.704 [2024-05-16 20:23:16.736094] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.704 [2024-05-16 20:23:16.736107] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.704 [2024-05-16 20:23:16.739137] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.704 [2024-05-16 20:23:16.748821] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.704 [2024-05-16 20:23:16.749216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.704 [2024-05-16 20:23:16.749247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.704 [2024-05-16 20:23:16.749264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.704 [2024-05-16 20:23:16.749505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.704 [2024-05-16 20:23:16.749749] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.704 [2024-05-16 20:23:16.749773] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.704 [2024-05-16 20:23:16.749788] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.704 [2024-05-16 20:23:16.753409] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.704 [2024-05-16 20:23:16.762795] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.704 [2024-05-16 20:23:16.763169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.704 [2024-05-16 20:23:16.763204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.704 [2024-05-16 20:23:16.763223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.704 [2024-05-16 20:23:16.763463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.704 [2024-05-16 20:23:16.763708] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.704 [2024-05-16 20:23:16.763731] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.704 [2024-05-16 20:23:16.763746] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.704 [2024-05-16 20:23:16.767374] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.704 [2024-05-16 20:23:16.776763] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.704 [2024-05-16 20:23:16.777159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.704 [2024-05-16 20:23:16.777189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.704 [2024-05-16 20:23:16.777206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.704 [2024-05-16 20:23:16.777446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.704 [2024-05-16 20:23:16.777691] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.704 [2024-05-16 20:23:16.777715] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.704 [2024-05-16 20:23:16.777729] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.704 [2024-05-16 20:23:16.781355] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.704 [2024-05-16 20:23:16.790761] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.704 [2024-05-16 20:23:16.791146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.704 [2024-05-16 20:23:16.791178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.704 [2024-05-16 20:23:16.791195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.704 [2024-05-16 20:23:16.791436] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.704 [2024-05-16 20:23:16.791682] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.704 [2024-05-16 20:23:16.791705] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.704 [2024-05-16 20:23:16.791720] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.704 [2024-05-16 20:23:16.795340] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.704 [2024-05-16 20:23:16.804737] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.704 [2024-05-16 20:23:16.805134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.704 [2024-05-16 20:23:16.805164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.704 [2024-05-16 20:23:16.805182] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.704 [2024-05-16 20:23:16.805422] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.704 [2024-05-16 20:23:16.805673] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.704 [2024-05-16 20:23:16.805697] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.704 [2024-05-16 20:23:16.805712] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.705 [2024-05-16 20:23:16.809336] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.705 [2024-05-16 20:23:16.818731] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.705 [2024-05-16 20:23:16.819125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.705 [2024-05-16 20:23:16.819155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.705 [2024-05-16 20:23:16.819172] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.705 [2024-05-16 20:23:16.819413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.705 [2024-05-16 20:23:16.819658] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.705 [2024-05-16 20:23:16.819681] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.705 [2024-05-16 20:23:16.819697] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.705 [2024-05-16 20:23:16.823324] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.705 [2024-05-16 20:23:16.832712] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.705 [2024-05-16 20:23:16.833117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.705 [2024-05-16 20:23:16.833148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.705 [2024-05-16 20:23:16.833165] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.705 [2024-05-16 20:23:16.833405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.705 [2024-05-16 20:23:16.833650] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.705 [2024-05-16 20:23:16.833673] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.705 [2024-05-16 20:23:16.833688] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.705 [2024-05-16 20:23:16.837312] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.705 [2024-05-16 20:23:16.846804] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.705 [2024-05-16 20:23:16.847165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.964 [2024-05-16 20:23:16.847196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.964 [2024-05-16 20:23:16.847214] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.964 [2024-05-16 20:23:16.847455] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.964 [2024-05-16 20:23:16.847700] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.965 [2024-05-16 20:23:16.847728] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.965 [2024-05-16 20:23:16.847748] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.965 [2024-05-16 20:23:16.851406] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.965 [2024-05-16 20:23:16.860845] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.965 [2024-05-16 20:23:16.861242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.965 [2024-05-16 20:23:16.861273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.965 [2024-05-16 20:23:16.861290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.965 [2024-05-16 20:23:16.861531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.965 [2024-05-16 20:23:16.861776] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.965 [2024-05-16 20:23:16.861799] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.965 [2024-05-16 20:23:16.861814] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.965 [2024-05-16 20:23:16.865438] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.965 [2024-05-16 20:23:16.874832] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.965 [2024-05-16 20:23:16.875227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.965 [2024-05-16 20:23:16.875258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.965 [2024-05-16 20:23:16.875275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.965 [2024-05-16 20:23:16.875515] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.965 [2024-05-16 20:23:16.875761] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.965 [2024-05-16 20:23:16.875785] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.965 [2024-05-16 20:23:16.875800] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.965 [2024-05-16 20:23:16.879427] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.965 [2024-05-16 20:23:16.888815] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.965 [2024-05-16 20:23:16.889219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.965 [2024-05-16 20:23:16.889251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.965 [2024-05-16 20:23:16.889269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.965 [2024-05-16 20:23:16.889510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.965 [2024-05-16 20:23:16.889755] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.965 [2024-05-16 20:23:16.889778] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.965 [2024-05-16 20:23:16.889793] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.965 [2024-05-16 20:23:16.893422] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.965 [2024-05-16 20:23:16.902817] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.965 [2024-05-16 20:23:16.903211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.965 [2024-05-16 20:23:16.903241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.965 [2024-05-16 20:23:16.903264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.965 [2024-05-16 20:23:16.903505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.965 [2024-05-16 20:23:16.903750] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.965 [2024-05-16 20:23:16.903773] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.965 [2024-05-16 20:23:16.903788] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.965 [2024-05-16 20:23:16.907414] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.965 [2024-05-16 20:23:16.916813] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.965 [2024-05-16 20:23:16.917181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.965 [2024-05-16 20:23:16.917212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.965 [2024-05-16 20:23:16.917230] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.965 [2024-05-16 20:23:16.917470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.965 [2024-05-16 20:23:16.917715] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.965 [2024-05-16 20:23:16.917739] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.965 [2024-05-16 20:23:16.917753] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.965 [2024-05-16 20:23:16.921377] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.965 [2024-05-16 20:23:16.930763] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.965 [2024-05-16 20:23:16.931131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.965 [2024-05-16 20:23:16.931161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.965 [2024-05-16 20:23:16.931178] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.965 [2024-05-16 20:23:16.931418] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.965 [2024-05-16 20:23:16.931663] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.965 [2024-05-16 20:23:16.931686] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.965 [2024-05-16 20:23:16.931701] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.965 [2024-05-16 20:23:16.935326] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.965 [2024-05-16 20:23:16.944717] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.965 [2024-05-16 20:23:16.945111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.965 [2024-05-16 20:23:16.945141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.965 [2024-05-16 20:23:16.945158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.965 [2024-05-16 20:23:16.945398] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.965 [2024-05-16 20:23:16.945643] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.965 [2024-05-16 20:23:16.945671] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.965 [2024-05-16 20:23:16.945687] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.965 [2024-05-16 20:23:16.949312] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.965 [2024-05-16 20:23:16.958698] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.965 [2024-05-16 20:23:16.959101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.965 [2024-05-16 20:23:16.959131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.965 [2024-05-16 20:23:16.959148] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.965 [2024-05-16 20:23:16.959389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.965 [2024-05-16 20:23:16.959633] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.965 [2024-05-16 20:23:16.959657] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.965 [2024-05-16 20:23:16.959672] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.965 [2024-05-16 20:23:16.963297] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.965 [2024-05-16 20:23:16.972686] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.965 [2024-05-16 20:23:16.973084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.965 [2024-05-16 20:23:16.973114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.965 [2024-05-16 20:23:16.973131] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.965 [2024-05-16 20:23:16.973371] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.965 [2024-05-16 20:23:16.973617] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.965 [2024-05-16 20:23:16.973640] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.965 [2024-05-16 20:23:16.973655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.965 [2024-05-16 20:23:16.977278] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.965 [2024-05-16 20:23:16.986701] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.965 [2024-05-16 20:23:16.987078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.965 [2024-05-16 20:23:16.987109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.965 [2024-05-16 20:23:16.987127] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.965 [2024-05-16 20:23:16.987367] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.965 [2024-05-16 20:23:16.987612] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.965 [2024-05-16 20:23:16.987636] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.965 [2024-05-16 20:23:16.987651] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.965 [2024-05-16 20:23:16.991275] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.965 [2024-05-16 20:23:17.000682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.965 [2024-05-16 20:23:17.001052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.966 [2024-05-16 20:23:17.001083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.966 [2024-05-16 20:23:17.001100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.966 [2024-05-16 20:23:17.001341] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.966 [2024-05-16 20:23:17.001587] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.966 [2024-05-16 20:23:17.001610] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.966 [2024-05-16 20:23:17.001625] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.966 [2024-05-16 20:23:17.005247] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.966 [2024-05-16 20:23:17.014639] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.966 [2024-05-16 20:23:17.015020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.966 [2024-05-16 20:23:17.015051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.966 [2024-05-16 20:23:17.015069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.966 [2024-05-16 20:23:17.015309] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.966 [2024-05-16 20:23:17.015554] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.966 [2024-05-16 20:23:17.015577] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.966 [2024-05-16 20:23:17.015592] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.966 [2024-05-16 20:23:17.019219] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.966 [2024-05-16 20:23:17.028614] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.966 [2024-05-16 20:23:17.029018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.966 [2024-05-16 20:23:17.029049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.966 [2024-05-16 20:23:17.029066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.966 [2024-05-16 20:23:17.029307] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.966 [2024-05-16 20:23:17.029552] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.966 [2024-05-16 20:23:17.029575] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.966 [2024-05-16 20:23:17.029590] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.966 [2024-05-16 20:23:17.033214] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.966 [2024-05-16 20:23:17.042600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.966 [2024-05-16 20:23:17.043010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.966 [2024-05-16 20:23:17.043041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.966 [2024-05-16 20:23:17.043058] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.966 [2024-05-16 20:23:17.043304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.966 [2024-05-16 20:23:17.043550] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.966 [2024-05-16 20:23:17.043573] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.966 [2024-05-16 20:23:17.043588] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.966 [2024-05-16 20:23:17.047214] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.966 [2024-05-16 20:23:17.056621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.966 [2024-05-16 20:23:17.056991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.966 [2024-05-16 20:23:17.057022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.966 [2024-05-16 20:23:17.057039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.966 [2024-05-16 20:23:17.057279] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.966 [2024-05-16 20:23:17.057524] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.966 [2024-05-16 20:23:17.057548] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.966 [2024-05-16 20:23:17.057564] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.966 [2024-05-16 20:23:17.061196] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.966 [2024-05-16 20:23:17.070595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.966 [2024-05-16 20:23:17.070992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.966 [2024-05-16 20:23:17.071023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.966 [2024-05-16 20:23:17.071040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.966 [2024-05-16 20:23:17.071281] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.966 [2024-05-16 20:23:17.071526] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.966 [2024-05-16 20:23:17.071549] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.966 [2024-05-16 20:23:17.071565] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.966 [2024-05-16 20:23:17.075197] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.966 [2024-05-16 20:23:17.084592] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.966 [2024-05-16 20:23:17.084984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.966 [2024-05-16 20:23:17.085014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.966 [2024-05-16 20:23:17.085031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.966 [2024-05-16 20:23:17.085271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.966 [2024-05-16 20:23:17.085516] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.966 [2024-05-16 20:23:17.085540] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.966 [2024-05-16 20:23:17.085560] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.966 [2024-05-16 20:23:17.089185] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.966 [2024-05-16 20:23:17.098598] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.966 [2024-05-16 20:23:17.098979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.966 [2024-05-16 20:23:17.099010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:29.966 [2024-05-16 20:23:17.099027] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:29.966 [2024-05-16 20:23:17.099268] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:29.966 [2024-05-16 20:23:17.099513] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.966 [2024-05-16 20:23:17.099536] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.966 [2024-05-16 20:23:17.099551] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.966 [2024-05-16 20:23:17.103182] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.226 [2024-05-16 20:23:17.112713] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.226 [2024-05-16 20:23:17.113135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.226 [2024-05-16 20:23:17.113168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.226 [2024-05-16 20:23:17.113186] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.226 [2024-05-16 20:23:17.113428] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.226 [2024-05-16 20:23:17.113696] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.226 [2024-05-16 20:23:17.113721] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.226 [2024-05-16 20:23:17.113737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.226 [2024-05-16 20:23:17.117407] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.226 [2024-05-16 20:23:17.126617] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.226 [2024-05-16 20:23:17.127005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.226 [2024-05-16 20:23:17.127037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.226 [2024-05-16 20:23:17.127055] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.226 [2024-05-16 20:23:17.127296] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.226 [2024-05-16 20:23:17.127540] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.226 [2024-05-16 20:23:17.127564] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.226 [2024-05-16 20:23:17.127579] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.226 [2024-05-16 20:23:17.131210] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.226 [2024-05-16 20:23:17.140637] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.226 [2024-05-16 20:23:17.141023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.226 [2024-05-16 20:23:17.141055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.226 [2024-05-16 20:23:17.141073] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.226 [2024-05-16 20:23:17.141313] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.226 [2024-05-16 20:23:17.141558] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.226 [2024-05-16 20:23:17.141582] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.226 [2024-05-16 20:23:17.141596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.226 [2024-05-16 20:23:17.145225] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.226 [2024-05-16 20:23:17.154633] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.226 [2024-05-16 20:23:17.154987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.226 [2024-05-16 20:23:17.155018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.226 [2024-05-16 20:23:17.155036] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.226 [2024-05-16 20:23:17.155277] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.226 [2024-05-16 20:23:17.155523] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.226 [2024-05-16 20:23:17.155546] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.226 [2024-05-16 20:23:17.155561] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.226 [2024-05-16 20:23:17.159188] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.226 [2024-05-16 20:23:17.168592] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.226 [2024-05-16 20:23:17.168991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.226 [2024-05-16 20:23:17.169021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.226 [2024-05-16 20:23:17.169039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.226 [2024-05-16 20:23:17.169279] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.226 [2024-05-16 20:23:17.169524] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.226 [2024-05-16 20:23:17.169548] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.226 [2024-05-16 20:23:17.169562] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.226 [2024-05-16 20:23:17.173193] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.226 [2024-05-16 20:23:17.182603] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.226 [2024-05-16 20:23:17.182978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.226 [2024-05-16 20:23:17.183009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.226 [2024-05-16 20:23:17.183027] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.226 [2024-05-16 20:23:17.183272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.226 [2024-05-16 20:23:17.183518] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.226 [2024-05-16 20:23:17.183541] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.226 [2024-05-16 20:23:17.183557] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.226 [2024-05-16 20:23:17.187188] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.226 [2024-05-16 20:23:17.196619] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.226 [2024-05-16 20:23:17.197016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.226 [2024-05-16 20:23:17.197047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.226 [2024-05-16 20:23:17.197064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.226 [2024-05-16 20:23:17.197304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.226 [2024-05-16 20:23:17.197549] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.226 [2024-05-16 20:23:17.197572] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.226 [2024-05-16 20:23:17.197588] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.226 [2024-05-16 20:23:17.201220] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.226 [2024-05-16 20:23:17.210635] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.226 [2024-05-16 20:23:17.211033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.226 [2024-05-16 20:23:17.211064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.226 [2024-05-16 20:23:17.211082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.226 [2024-05-16 20:23:17.211322] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.226 [2024-05-16 20:23:17.211567] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.226 [2024-05-16 20:23:17.211590] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.226 [2024-05-16 20:23:17.211605] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.226 [2024-05-16 20:23:17.215242] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.226 [2024-05-16 20:23:17.224650] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.226 [2024-05-16 20:23:17.225004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.226 [2024-05-16 20:23:17.225035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.226 [2024-05-16 20:23:17.225053] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.226 [2024-05-16 20:23:17.225293] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.226 [2024-05-16 20:23:17.225537] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.226 [2024-05-16 20:23:17.225560] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.227 [2024-05-16 20:23:17.225583] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.227 [2024-05-16 20:23:17.229219] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.227 [2024-05-16 20:23:17.238639] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.227 [2024-05-16 20:23:17.239013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.227 [2024-05-16 20:23:17.239045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.227 [2024-05-16 20:23:17.239063] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.227 [2024-05-16 20:23:17.239303] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.227 [2024-05-16 20:23:17.239548] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.227 [2024-05-16 20:23:17.239571] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.227 [2024-05-16 20:23:17.239586] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.227 [2024-05-16 20:23:17.243220] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.227 [2024-05-16 20:23:17.252622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.227 [2024-05-16 20:23:17.253023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.227 [2024-05-16 20:23:17.253053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.227 [2024-05-16 20:23:17.253070] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.227 [2024-05-16 20:23:17.253311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.227 [2024-05-16 20:23:17.253555] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.227 [2024-05-16 20:23:17.253578] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.227 [2024-05-16 20:23:17.253593] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.227 [2024-05-16 20:23:17.257216] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.227 [2024-05-16 20:23:17.266615] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.227 [2024-05-16 20:23:17.266966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.227 [2024-05-16 20:23:17.266998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.227 [2024-05-16 20:23:17.267015] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.227 [2024-05-16 20:23:17.267256] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.227 [2024-05-16 20:23:17.267501] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.227 [2024-05-16 20:23:17.267524] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.227 [2024-05-16 20:23:17.267539] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.227 [2024-05-16 20:23:17.271176] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.227 [2024-05-16 20:23:17.280581] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.227 [2024-05-16 20:23:17.280954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.227 [2024-05-16 20:23:17.280989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.227 [2024-05-16 20:23:17.281008] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.227 [2024-05-16 20:23:17.281248] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.227 [2024-05-16 20:23:17.281493] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.227 [2024-05-16 20:23:17.281516] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.227 [2024-05-16 20:23:17.281531] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.227 [2024-05-16 20:23:17.285157] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.227 [2024-05-16 20:23:17.294557] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.227 [2024-05-16 20:23:17.294903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.227 [2024-05-16 20:23:17.294934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.227 [2024-05-16 20:23:17.294951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.227 [2024-05-16 20:23:17.295192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.227 [2024-05-16 20:23:17.295437] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.227 [2024-05-16 20:23:17.295461] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.227 [2024-05-16 20:23:17.295476] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.227 [2024-05-16 20:23:17.299101] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.227 [2024-05-16 20:23:17.308582] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.227 [2024-05-16 20:23:17.308998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.227 [2024-05-16 20:23:17.309046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.227 [2024-05-16 20:23:17.309064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.227 [2024-05-16 20:23:17.309303] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.227 [2024-05-16 20:23:17.309548] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.227 [2024-05-16 20:23:17.309572] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.227 [2024-05-16 20:23:17.309587] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.227 [2024-05-16 20:23:17.313218] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.227 [2024-05-16 20:23:17.322604] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.227 [2024-05-16 20:23:17.323023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.227 [2024-05-16 20:23:17.323072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.227 [2024-05-16 20:23:17.323090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.227 [2024-05-16 20:23:17.323330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.227 [2024-05-16 20:23:17.323581] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.227 [2024-05-16 20:23:17.323605] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.227 [2024-05-16 20:23:17.323620] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.227 [2024-05-16 20:23:17.327242] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.227 [2024-05-16 20:23:17.336628] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.227 [2024-05-16 20:23:17.337042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.227 [2024-05-16 20:23:17.337072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.227 [2024-05-16 20:23:17.337089] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.227 [2024-05-16 20:23:17.337330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.227 [2024-05-16 20:23:17.337575] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.227 [2024-05-16 20:23:17.337598] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.227 [2024-05-16 20:23:17.337613] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.227 [2024-05-16 20:23:17.341236] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.227 [2024-05-16 20:23:17.350623] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.227 [2024-05-16 20:23:17.350990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.227 [2024-05-16 20:23:17.351021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.227 [2024-05-16 20:23:17.351038] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.227 [2024-05-16 20:23:17.351279] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.227 [2024-05-16 20:23:17.351524] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.227 [2024-05-16 20:23:17.351548] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.227 [2024-05-16 20:23:17.351563] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.227 [2024-05-16 20:23:17.355186] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.227 [2024-05-16 20:23:17.364610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.227 [2024-05-16 20:23:17.365020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.227 [2024-05-16 20:23:17.365050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.227 [2024-05-16 20:23:17.365067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.227 [2024-05-16 20:23:17.365307] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.227 [2024-05-16 20:23:17.365560] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.227 [2024-05-16 20:23:17.365585] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.227 [2024-05-16 20:23:17.365601] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.227 [2024-05-16 20:23:17.369301] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.486 [2024-05-16 20:23:17.378587] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.486 [2024-05-16 20:23:17.379054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.487 [2024-05-16 20:23:17.379085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.487 [2024-05-16 20:23:17.379103] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.487 [2024-05-16 20:23:17.379344] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.487 [2024-05-16 20:23:17.379589] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.487 [2024-05-16 20:23:17.379612] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.487 [2024-05-16 20:23:17.379627] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.487 [2024-05-16 20:23:17.383253] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.487 [2024-05-16 20:23:17.392641] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.487 [2024-05-16 20:23:17.393012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.487 [2024-05-16 20:23:17.393043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.487 [2024-05-16 20:23:17.393061] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.487 [2024-05-16 20:23:17.393301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.487 [2024-05-16 20:23:17.393546] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.487 [2024-05-16 20:23:17.393569] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.487 [2024-05-16 20:23:17.393584] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.487 [2024-05-16 20:23:17.397212] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.487 [2024-05-16 20:23:17.406619] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.487 [2024-05-16 20:23:17.407035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.487 [2024-05-16 20:23:17.407066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.487 [2024-05-16 20:23:17.407083] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.487 [2024-05-16 20:23:17.407324] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.487 [2024-05-16 20:23:17.407569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.487 [2024-05-16 20:23:17.407593] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.487 [2024-05-16 20:23:17.407607] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.487 [2024-05-16 20:23:17.411235] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.487 [2024-05-16 20:23:17.420629] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.487 [2024-05-16 20:23:17.421007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.487 [2024-05-16 20:23:17.421039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.487 [2024-05-16 20:23:17.421062] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.487 [2024-05-16 20:23:17.421303] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.487 [2024-05-16 20:23:17.421549] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.487 [2024-05-16 20:23:17.421573] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.487 [2024-05-16 20:23:17.421587] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.487 [2024-05-16 20:23:17.425215] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.487 [2024-05-16 20:23:17.434609] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.487 [2024-05-16 20:23:17.434978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.487 [2024-05-16 20:23:17.435009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.487 [2024-05-16 20:23:17.435026] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.487 [2024-05-16 20:23:17.435266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.487 [2024-05-16 20:23:17.435510] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.487 [2024-05-16 20:23:17.435533] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.487 [2024-05-16 20:23:17.435548] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.487 [2024-05-16 20:23:17.439174] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.487 [2024-05-16 20:23:17.448561] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.487 [2024-05-16 20:23:17.449009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.487 [2024-05-16 20:23:17.449063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.487 [2024-05-16 20:23:17.449081] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.487 [2024-05-16 20:23:17.449321] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.487 [2024-05-16 20:23:17.449566] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.487 [2024-05-16 20:23:17.449590] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.487 [2024-05-16 20:23:17.449605] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.487 [2024-05-16 20:23:17.453229] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.487 [2024-05-16 20:23:17.462629] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.487 [2024-05-16 20:23:17.463025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.487 [2024-05-16 20:23:17.463056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.487 [2024-05-16 20:23:17.463073] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.487 [2024-05-16 20:23:17.463314] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.487 [2024-05-16 20:23:17.463559] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.487 [2024-05-16 20:23:17.463588] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.487 [2024-05-16 20:23:17.463603] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.487 [2024-05-16 20:23:17.467230] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.487 [2024-05-16 20:23:17.476622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.487 [2024-05-16 20:23:17.476994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.487 [2024-05-16 20:23:17.477025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.487 [2024-05-16 20:23:17.477042] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.487 [2024-05-16 20:23:17.477282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.487 [2024-05-16 20:23:17.477527] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.487 [2024-05-16 20:23:17.477551] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.487 [2024-05-16 20:23:17.477566] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.487 [2024-05-16 20:23:17.481194] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.487 [2024-05-16 20:23:17.490585] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.487 [2024-05-16 20:23:17.490961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.487 [2024-05-16 20:23:17.490992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.487 [2024-05-16 20:23:17.491009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.487 [2024-05-16 20:23:17.491250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.487 [2024-05-16 20:23:17.491495] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.487 [2024-05-16 20:23:17.491518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.487 [2024-05-16 20:23:17.491533] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.487 [2024-05-16 20:23:17.495160] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.487 [2024-05-16 20:23:17.504560] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.487 [2024-05-16 20:23:17.504925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.487 [2024-05-16 20:23:17.504957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.487 [2024-05-16 20:23:17.504975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.487 [2024-05-16 20:23:17.505221] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.487 [2024-05-16 20:23:17.505467] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.487 [2024-05-16 20:23:17.505490] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.487 [2024-05-16 20:23:17.505505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.487 [2024-05-16 20:23:17.509134] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.487 [2024-05-16 20:23:17.518554] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.487 [2024-05-16 20:23:17.518930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.487 [2024-05-16 20:23:17.518961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.487 [2024-05-16 20:23:17.518978] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.487 [2024-05-16 20:23:17.519219] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.488 [2024-05-16 20:23:17.519464] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.488 [2024-05-16 20:23:17.519487] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.488 [2024-05-16 20:23:17.519502] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.488 [2024-05-16 20:23:17.523131] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.488 [2024-05-16 20:23:17.532533] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.488 [2024-05-16 20:23:17.532919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.488 [2024-05-16 20:23:17.532950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.488 [2024-05-16 20:23:17.532968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.488 [2024-05-16 20:23:17.533209] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.488 [2024-05-16 20:23:17.533454] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.488 [2024-05-16 20:23:17.533477] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.488 [2024-05-16 20:23:17.533492] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.488 [2024-05-16 20:23:17.537122] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.488 [2024-05-16 20:23:17.546523] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.488 [2024-05-16 20:23:17.546876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.488 [2024-05-16 20:23:17.546907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.488 [2024-05-16 20:23:17.546924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.488 [2024-05-16 20:23:17.547165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.488 [2024-05-16 20:23:17.547410] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.488 [2024-05-16 20:23:17.547433] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.488 [2024-05-16 20:23:17.547448] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.488 [2024-05-16 20:23:17.551074] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.488 [2024-05-16 20:23:17.560498] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.488 [2024-05-16 20:23:17.560894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.488 [2024-05-16 20:23:17.560929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.488 [2024-05-16 20:23:17.560946] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.488 [2024-05-16 20:23:17.561192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.488 [2024-05-16 20:23:17.561437] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.488 [2024-05-16 20:23:17.561461] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.488 [2024-05-16 20:23:17.561475] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.488 [2024-05-16 20:23:17.565100] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.488 [2024-05-16 20:23:17.574491] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.488 [2024-05-16 20:23:17.574883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.488 [2024-05-16 20:23:17.574915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.488 [2024-05-16 20:23:17.574932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.488 [2024-05-16 20:23:17.575173] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.488 [2024-05-16 20:23:17.575417] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.488 [2024-05-16 20:23:17.575440] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.488 [2024-05-16 20:23:17.575455] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.488 [2024-05-16 20:23:17.579082] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.488 [2024-05-16 20:23:17.588474] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.488 [2024-05-16 20:23:17.588866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.488 [2024-05-16 20:23:17.588897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.488 [2024-05-16 20:23:17.588914] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.488 [2024-05-16 20:23:17.589155] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.488 [2024-05-16 20:23:17.589400] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.488 [2024-05-16 20:23:17.589423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.488 [2024-05-16 20:23:17.589438] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.488 [2024-05-16 20:23:17.593061] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.488 [2024-05-16 20:23:17.602450] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.488 [2024-05-16 20:23:17.602814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.488 [2024-05-16 20:23:17.602845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.488 [2024-05-16 20:23:17.602873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.488 [2024-05-16 20:23:17.603115] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.488 [2024-05-16 20:23:17.603360] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.488 [2024-05-16 20:23:17.603384] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.488 [2024-05-16 20:23:17.603404] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.488 [2024-05-16 20:23:17.607028] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.488 [2024-05-16 20:23:17.616426] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.488 [2024-05-16 20:23:17.616839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.488 [2024-05-16 20:23:17.616877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.488 [2024-05-16 20:23:17.616895] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.488 [2024-05-16 20:23:17.617136] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.488 [2024-05-16 20:23:17.617381] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.488 [2024-05-16 20:23:17.617404] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.488 [2024-05-16 20:23:17.617419] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.488 [2024-05-16 20:23:17.621040] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.488 [2024-05-16 20:23:17.630535] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.488 [2024-05-16 20:23:17.630913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.749 [2024-05-16 20:23:17.630945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.749 [2024-05-16 20:23:17.630964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.749 [2024-05-16 20:23:17.631205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.749 [2024-05-16 20:23:17.631457] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.749 [2024-05-16 20:23:17.631486] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.749 [2024-05-16 20:23:17.631507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.749 [2024-05-16 20:23:17.635160] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.749 [2024-05-16 20:23:17.644603] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.749 [2024-05-16 20:23:17.644954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.749 [2024-05-16 20:23:17.644985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.749 [2024-05-16 20:23:17.645003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.749 [2024-05-16 20:23:17.645243] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.749 [2024-05-16 20:23:17.645488] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.750 [2024-05-16 20:23:17.645512] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.750 [2024-05-16 20:23:17.645527] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.750 [2024-05-16 20:23:17.649151] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.750 [2024-05-16 20:23:17.658538] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.750 [2024-05-16 20:23:17.658935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.750 [2024-05-16 20:23:17.658966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.750 [2024-05-16 20:23:17.658984] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.750 [2024-05-16 20:23:17.659224] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.750 [2024-05-16 20:23:17.659469] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.750 [2024-05-16 20:23:17.659493] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.750 [2024-05-16 20:23:17.659508] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.750 [2024-05-16 20:23:17.663133] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.750 [2024-05-16 20:23:17.672520] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.750 [2024-05-16 20:23:17.672891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.750 [2024-05-16 20:23:17.672921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.750 [2024-05-16 20:23:17.672938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.750 [2024-05-16 20:23:17.673179] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.750 [2024-05-16 20:23:17.673423] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.750 [2024-05-16 20:23:17.673447] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.750 [2024-05-16 20:23:17.673462] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.750 [2024-05-16 20:23:17.677089] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.750 [2024-05-16 20:23:17.686480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.750 [2024-05-16 20:23:17.686868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.750 [2024-05-16 20:23:17.686899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.750 [2024-05-16 20:23:17.686916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.750 [2024-05-16 20:23:17.687157] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.750 [2024-05-16 20:23:17.687402] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.750 [2024-05-16 20:23:17.687425] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.750 [2024-05-16 20:23:17.687439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.750 [2024-05-16 20:23:17.691065] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.750 [2024-05-16 20:23:17.700455] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.750 [2024-05-16 20:23:17.700858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.750 [2024-05-16 20:23:17.700889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.750 [2024-05-16 20:23:17.700907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.750 [2024-05-16 20:23:17.701147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.750 [2024-05-16 20:23:17.701399] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.750 [2024-05-16 20:23:17.701422] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.750 [2024-05-16 20:23:17.701437] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.750 [2024-05-16 20:23:17.705063] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.750 [2024-05-16 20:23:17.714457] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.750 [2024-05-16 20:23:17.714860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.750 [2024-05-16 20:23:17.714891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.750 [2024-05-16 20:23:17.714909] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.750 [2024-05-16 20:23:17.715149] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.750 [2024-05-16 20:23:17.715395] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.750 [2024-05-16 20:23:17.715418] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.750 [2024-05-16 20:23:17.715432] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.750 [2024-05-16 20:23:17.719056] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.750 [2024-05-16 20:23:17.728458] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.750 [2024-05-16 20:23:17.728829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.750 [2024-05-16 20:23:17.728868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.750 [2024-05-16 20:23:17.728888] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.750 [2024-05-16 20:23:17.729129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.750 [2024-05-16 20:23:17.729374] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.750 [2024-05-16 20:23:17.729398] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.750 [2024-05-16 20:23:17.729413] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.750 [2024-05-16 20:23:17.733037] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.750 [2024-05-16 20:23:17.742427] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.750 [2024-05-16 20:23:17.742827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.750 [2024-05-16 20:23:17.742864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.750 [2024-05-16 20:23:17.742884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.750 [2024-05-16 20:23:17.743125] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.750 [2024-05-16 20:23:17.743370] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.750 [2024-05-16 20:23:17.743393] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.750 [2024-05-16 20:23:17.743408] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.750 [2024-05-16 20:23:17.747036] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.750 [2024-05-16 20:23:17.756436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.750 [2024-05-16 20:23:17.756827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.750 [2024-05-16 20:23:17.756866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.750 [2024-05-16 20:23:17.756886] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.750 [2024-05-16 20:23:17.757127] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.750 [2024-05-16 20:23:17.757373] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.750 [2024-05-16 20:23:17.757396] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.750 [2024-05-16 20:23:17.757410] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.750 [2024-05-16 20:23:17.761038] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.750 [2024-05-16 20:23:17.770447] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.750 [2024-05-16 20:23:17.770814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.750 [2024-05-16 20:23:17.770844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.750 [2024-05-16 20:23:17.770873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.750 [2024-05-16 20:23:17.771119] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.750 [2024-05-16 20:23:17.771364] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.750 [2024-05-16 20:23:17.771387] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.750 [2024-05-16 20:23:17.771402] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.750 [2024-05-16 20:23:17.775027] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.750 [2024-05-16 20:23:17.784433] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.750 [2024-05-16 20:23:17.784794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.750 [2024-05-16 20:23:17.784825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.750 [2024-05-16 20:23:17.784842] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.750 [2024-05-16 20:23:17.785091] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.750 [2024-05-16 20:23:17.785337] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.750 [2024-05-16 20:23:17.785360] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.751 [2024-05-16 20:23:17.785375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.751 [2024-05-16 20:23:17.789001] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.751 [2024-05-16 20:23:17.798407] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.751 [2024-05-16 20:23:17.798797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.751 [2024-05-16 20:23:17.798832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.751 [2024-05-16 20:23:17.798850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.751 [2024-05-16 20:23:17.799104] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.751 [2024-05-16 20:23:17.799349] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.751 [2024-05-16 20:23:17.799372] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.751 [2024-05-16 20:23:17.799387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.751 [2024-05-16 20:23:17.803014] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.751 [2024-05-16 20:23:17.812416] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.751 [2024-05-16 20:23:17.812805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.751 [2024-05-16 20:23:17.812836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.751 [2024-05-16 20:23:17.812868] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.751 [2024-05-16 20:23:17.813113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.751 [2024-05-16 20:23:17.813357] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.751 [2024-05-16 20:23:17.813381] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.751 [2024-05-16 20:23:17.813395] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.751 [2024-05-16 20:23:17.817023] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.751 [2024-05-16 20:23:17.826420] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.751 [2024-05-16 20:23:17.826795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.751 [2024-05-16 20:23:17.826826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.751 [2024-05-16 20:23:17.826843] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.751 [2024-05-16 20:23:17.827095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.751 [2024-05-16 20:23:17.827340] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.751 [2024-05-16 20:23:17.827363] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.751 [2024-05-16 20:23:17.827378] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.751 [2024-05-16 20:23:17.831006] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.751 [2024-05-16 20:23:17.840406] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.751 [2024-05-16 20:23:17.840778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.751 [2024-05-16 20:23:17.840809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.751 [2024-05-16 20:23:17.840826] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.751 [2024-05-16 20:23:17.841077] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.751 [2024-05-16 20:23:17.841328] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.751 [2024-05-16 20:23:17.841352] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.751 [2024-05-16 20:23:17.841367] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.751 [2024-05-16 20:23:17.844992] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.751 [2024-05-16 20:23:17.854384] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.751 [2024-05-16 20:23:17.854771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.751 [2024-05-16 20:23:17.854802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.751 [2024-05-16 20:23:17.854818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.751 [2024-05-16 20:23:17.855069] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.751 [2024-05-16 20:23:17.855314] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.751 [2024-05-16 20:23:17.855338] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.751 [2024-05-16 20:23:17.855352] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.751 [2024-05-16 20:23:17.858978] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.751 [2024-05-16 20:23:17.868375] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.751 [2024-05-16 20:23:17.868763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.751 [2024-05-16 20:23:17.868793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.751 [2024-05-16 20:23:17.868810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.751 [2024-05-16 20:23:17.869060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.751 [2024-05-16 20:23:17.869306] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.751 [2024-05-16 20:23:17.869329] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.751 [2024-05-16 20:23:17.869343] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.751 [2024-05-16 20:23:17.872968] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.751 [2024-05-16 20:23:17.882371] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.751 [2024-05-16 20:23:17.882757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.751 [2024-05-16 20:23:17.882788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:30.751 [2024-05-16 20:23:17.882805] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:30.751 [2024-05-16 20:23:17.883056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:30.751 [2024-05-16 20:23:17.883302] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.751 [2024-05-16 20:23:17.883326] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.751 [2024-05-16 20:23:17.883340] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.751 [2024-05-16 20:23:17.886965] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.011 [2024-05-16 20:23:17.896275] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.011 [2024-05-16 20:23:17.896668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.011 [2024-05-16 20:23:17.896699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.011 [2024-05-16 20:23:17.896717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.011 [2024-05-16 20:23:17.896969] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.011 [2024-05-16 20:23:17.897216] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.011 [2024-05-16 20:23:17.897239] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.011 [2024-05-16 20:23:17.897253] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.011 [2024-05-16 20:23:17.900935] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.011 [2024-05-16 20:23:17.910334] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.011 [2024-05-16 20:23:17.910720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.011 [2024-05-16 20:23:17.910767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.011 [2024-05-16 20:23:17.910785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.011 [2024-05-16 20:23:17.911037] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.011 [2024-05-16 20:23:17.911283] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.011 [2024-05-16 20:23:17.911307] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.011 [2024-05-16 20:23:17.911321] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.011 [2024-05-16 20:23:17.914950] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.011 [2024-05-16 20:23:17.924347] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.011 [2024-05-16 20:23:17.924743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.011 [2024-05-16 20:23:17.924774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.011 [2024-05-16 20:23:17.924792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.011 [2024-05-16 20:23:17.925044] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.011 [2024-05-16 20:23:17.925289] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.011 [2024-05-16 20:23:17.925313] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.011 [2024-05-16 20:23:17.925328] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.011 [2024-05-16 20:23:17.928952] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.011 [2024-05-16 20:23:17.938345] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.011 [2024-05-16 20:23:17.938738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.011 [2024-05-16 20:23:17.938768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.011 [2024-05-16 20:23:17.938793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.011 [2024-05-16 20:23:17.939048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.011 [2024-05-16 20:23:17.939294] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.011 [2024-05-16 20:23:17.939317] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.011 [2024-05-16 20:23:17.939332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.011 [2024-05-16 20:23:17.942981] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.011 [2024-05-16 20:23:17.952379] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.011 [2024-05-16 20:23:17.952847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.011 [2024-05-16 20:23:17.952916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.011 [2024-05-16 20:23:17.952934] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.011 [2024-05-16 20:23:17.953174] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.011 [2024-05-16 20:23:17.953420] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.011 [2024-05-16 20:23:17.953443] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.011 [2024-05-16 20:23:17.953458] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.011 [2024-05-16 20:23:17.957084] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.012 [2024-05-16 20:23:17.966268] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.012 [2024-05-16 20:23:17.966652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.012 [2024-05-16 20:23:17.966701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.012 [2024-05-16 20:23:17.966718] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.012 [2024-05-16 20:23:17.966970] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.012 [2024-05-16 20:23:17.967215] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.012 [2024-05-16 20:23:17.967239] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.012 [2024-05-16 20:23:17.967254] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.012 [2024-05-16 20:23:17.970879] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.012 [2024-05-16 20:23:17.980274] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.012 [2024-05-16 20:23:17.980657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.012 [2024-05-16 20:23:17.980687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.012 [2024-05-16 20:23:17.980705] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.012 [2024-05-16 20:23:17.980955] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.012 [2024-05-16 20:23:17.981201] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.012 [2024-05-16 20:23:17.981229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.012 [2024-05-16 20:23:17.981245] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.012 [2024-05-16 20:23:17.984869] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.012 [2024-05-16 20:23:17.994264] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.012 [2024-05-16 20:23:17.994625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.012 [2024-05-16 20:23:17.994655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.012 [2024-05-16 20:23:17.994673] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.012 [2024-05-16 20:23:17.994925] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.012 [2024-05-16 20:23:17.995170] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.012 [2024-05-16 20:23:17.995194] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.012 [2024-05-16 20:23:17.995208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.012 [2024-05-16 20:23:17.998824] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.012 [2024-05-16 20:23:18.008309] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.012 [2024-05-16 20:23:18.008678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.012 [2024-05-16 20:23:18.008709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.012 [2024-05-16 20:23:18.008726] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.012 [2024-05-16 20:23:18.008978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.012 [2024-05-16 20:23:18.009224] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.012 [2024-05-16 20:23:18.009248] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.012 [2024-05-16 20:23:18.009262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.012 [2024-05-16 20:23:18.012890] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.012 [2024-05-16 20:23:18.022283] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.012 [2024-05-16 20:23:18.022666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.012 [2024-05-16 20:23:18.022697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.012 [2024-05-16 20:23:18.022714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.012 [2024-05-16 20:23:18.022967] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.012 [2024-05-16 20:23:18.023212] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.012 [2024-05-16 20:23:18.023236] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.012 [2024-05-16 20:23:18.023251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.012 [2024-05-16 20:23:18.026877] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.012 [2024-05-16 20:23:18.036272] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.012 [2024-05-16 20:23:18.036676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.012 [2024-05-16 20:23:18.036707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.012 [2024-05-16 20:23:18.036724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.012 [2024-05-16 20:23:18.036976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.012 [2024-05-16 20:23:18.037221] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.012 [2024-05-16 20:23:18.037245] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.012 [2024-05-16 20:23:18.037260] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.012 [2024-05-16 20:23:18.040885] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.012 [2024-05-16 20:23:18.050280] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.012 [2024-05-16 20:23:18.050671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.012 [2024-05-16 20:23:18.050701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.012 [2024-05-16 20:23:18.050719] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.012 [2024-05-16 20:23:18.050972] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.012 [2024-05-16 20:23:18.051217] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.012 [2024-05-16 20:23:18.051240] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.012 [2024-05-16 20:23:18.051255] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.012 [2024-05-16 20:23:18.054878] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.012 [2024-05-16 20:23:18.064269] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.012 [2024-05-16 20:23:18.064656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.012 [2024-05-16 20:23:18.064686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.012 [2024-05-16 20:23:18.064703] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.012 [2024-05-16 20:23:18.064956] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.012 [2024-05-16 20:23:18.065201] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.012 [2024-05-16 20:23:18.065225] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.012 [2024-05-16 20:23:18.065240] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.012 [2024-05-16 20:23:18.068862] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
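Every failed reset above bottoms out in the same primitive: a TCP connect() to the target at 10.0.0.2:4420 that is refused because nothing is listening there yet, and on Linux errno 111 is ECONNREFUSED. A minimal sketch of that failure mode, using plain BSD sockets rather than SPDK's posix_sock_create and with the address and port taken from the log, looks like this:

```c
/* Minimal sketch: attempt a TCP connect to the NVMe/TCP target address seen in
 * the log (10.0.0.2, port 4420) and report errno on failure.  With no listener
 * present, connect() fails with errno 111 (ECONNREFUSED), the same value the
 * posix_sock_create entries above print.  Plain BSD sockets, not SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no target listening this prints: connect() failed, errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected\n");
    }

    close(fd);
    return 0;
}
```

The bdev_nvme reset path keeps retrying this connect until the target is back, which is why the same four-line error block repeats until the new nvmf_tgt below finishes starting.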
00:24:31.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 315767 Killed "${NVMF_APP[@]}" "$@" 00:24:31.012 20:23:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:31.012 20:23:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:31.012 20:23:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:31.012 20:23:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:31.012 20:23:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:31.012 20:23:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=316721 00:24:31.012 20:23:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:31.012 20:23:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 316721 00:24:31.012 20:23:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 316721 ']' 00:24:31.012 20:23:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.012 20:23:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:31.013 20:23:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.013 20:23:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:31.013 20:23:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:31.013 [2024-05-16 20:23:18.078266] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.013 [2024-05-16 20:23:18.078623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.013 [2024-05-16 20:23:18.078654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.013 [2024-05-16 20:23:18.078671] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.013 [2024-05-16 20:23:18.078923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.013 [2024-05-16 20:23:18.079169] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.013 [2024-05-16 20:23:18.079192] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.013 [2024-05-16 20:23:18.079207] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.013 [2024-05-16 20:23:18.082823] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.013 [2024-05-16 20:23:18.092228] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.013 [2024-05-16 20:23:18.092623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.013 [2024-05-16 20:23:18.092653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.013 [2024-05-16 20:23:18.092671] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.013 [2024-05-16 20:23:18.092922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.013 [2024-05-16 20:23:18.093168] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.013 [2024-05-16 20:23:18.093192] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.013 [2024-05-16 20:23:18.093207] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.013 [2024-05-16 20:23:18.096827] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.013 [2024-05-16 20:23:18.106237] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.013 [2024-05-16 20:23:18.106601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.013 [2024-05-16 20:23:18.106631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.013 [2024-05-16 20:23:18.106648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.013 [2024-05-16 20:23:18.106898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.013 [2024-05-16 20:23:18.107150] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.013 [2024-05-16 20:23:18.107174] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.013 [2024-05-16 20:23:18.107188] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.013 [2024-05-16 20:23:18.110706] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.013 [2024-05-16 20:23:18.119616] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.013 [2024-05-16 20:23:18.119959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.013 [2024-05-16 20:23:18.119986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.013 [2024-05-16 20:23:18.120002] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.013 [2024-05-16 20:23:18.120226] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.013 [2024-05-16 20:23:18.120426] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.013 [2024-05-16 20:23:18.120445] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.013 [2024-05-16 20:23:18.120457] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.013 [2024-05-16 20:23:18.122385] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:24:31.013 [2024-05-16 20:23:18.122456] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.013 [2024-05-16 20:23:18.123645] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.013 [2024-05-16 20:23:18.132999] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.013 [2024-05-16 20:23:18.133411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.013 [2024-05-16 20:23:18.133438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.013 [2024-05-16 20:23:18.133453] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.013 [2024-05-16 20:23:18.133691] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.013 [2024-05-16 20:23:18.133919] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.013 [2024-05-16 20:23:18.133940] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.013 [2024-05-16 20:23:18.133952] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.013 [2024-05-16 20:23:18.136970] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.013 [2024-05-16 20:23:18.146419] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.013 [2024-05-16 20:23:18.146795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.013 [2024-05-16 20:23:18.146837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.013 [2024-05-16 20:23:18.146859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.013 [2024-05-16 20:23:18.147107] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.013 [2024-05-16 20:23:18.147331] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.013 [2024-05-16 20:23:18.147351] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.013 [2024-05-16 20:23:18.147363] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.013 [2024-05-16 20:23:18.150393] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.272 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.272 [2024-05-16 20:23:18.159796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.273 [2024-05-16 20:23:18.160182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.273 [2024-05-16 20:23:18.160209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.273 [2024-05-16 20:23:18.160225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.273 [2024-05-16 20:23:18.160483] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.273 [2024-05-16 20:23:18.160749] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.273 [2024-05-16 20:23:18.160772] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.273 [2024-05-16 20:23:18.160786] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.273 [2024-05-16 20:23:18.164392] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
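The "EAL: No free 2048 kB hugepages reported on node 1" notice above comes from DPDK's EAL while the new target initializes. A quick way to see what that per-node counter actually holds is to read it from sysfs; the sketch below assumes the standard Linux layout under /sys/devices/system/node and is a generic diagnostic, not part of the test harness:

```c
/* Sketch: read the per-NUMA-node free 2048 kB hugepage counter that the EAL
 * notice above refers to.  Assumes the usual Linux sysfs path
 * /sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages;
 * generic diagnostic code, not part of the SPDK test scripts. */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
    FILE *f = fopen(path, "r");
    if (f == NULL) {
        perror(path);
        return 1;
    }

    unsigned long free_pages = 0;
    if (fscanf(f, "%lu", &free_pages) == 1) {
        printf("node1: %lu free 2048 kB hugepages\n", free_pages);
    }
    fclose(f);
    return 0;
}
```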
00:24:31.273 [2024-05-16 20:23:18.173795] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.273 [2024-05-16 20:23:18.174300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.273 [2024-05-16 20:23:18.174331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.273 [2024-05-16 20:23:18.174349] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.273 [2024-05-16 20:23:18.174590] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.273 [2024-05-16 20:23:18.174834] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.273 [2024-05-16 20:23:18.174869] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.273 [2024-05-16 20:23:18.174900] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.273 [2024-05-16 20:23:18.178432] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.273 [2024-05-16 20:23:18.187653] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.273 [2024-05-16 20:23:18.188037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.273 [2024-05-16 20:23:18.188065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.273 [2024-05-16 20:23:18.188080] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.273 [2024-05-16 20:23:18.188326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.273 [2024-05-16 20:23:18.188572] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.273 [2024-05-16 20:23:18.188595] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.273 [2024-05-16 20:23:18.188610] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.273 [2024-05-16 20:23:18.191942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:31.273 [2024-05-16 20:23:18.192203] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.273 [2024-05-16 20:23:18.201662] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.273 [2024-05-16 20:23:18.202200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.273 [2024-05-16 20:23:18.202255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.273 [2024-05-16 20:23:18.202277] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.273 [2024-05-16 20:23:18.202530] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.273 [2024-05-16 20:23:18.202780] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.273 [2024-05-16 20:23:18.202806] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.273 [2024-05-16 20:23:18.202824] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.273 [2024-05-16 20:23:18.206481] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.273 [2024-05-16 20:23:18.215816] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.273 [2024-05-16 20:23:18.216260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.273 [2024-05-16 20:23:18.216292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.273 [2024-05-16 20:23:18.216309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.273 [2024-05-16 20:23:18.216560] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.273 [2024-05-16 20:23:18.216807] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.273 [2024-05-16 20:23:18.216842] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.273 [2024-05-16 20:23:18.216868] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.273 [2024-05-16 20:23:18.220509] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.273 [2024-05-16 20:23:18.229528] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.273 [2024-05-16 20:23:18.229870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.273 [2024-05-16 20:23:18.229899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.273 [2024-05-16 20:23:18.229916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.273 [2024-05-16 20:23:18.230147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.273 [2024-05-16 20:23:18.230362] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.273 [2024-05-16 20:23:18.230383] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.273 [2024-05-16 20:23:18.230397] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.273 [2024-05-16 20:23:18.233590] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.273 [2024-05-16 20:23:18.242913] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.273 [2024-05-16 20:23:18.243366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.273 [2024-05-16 20:23:18.243402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.273 [2024-05-16 20:23:18.243418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.273 [2024-05-16 20:23:18.243671] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.273 [2024-05-16 20:23:18.243947] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.273 [2024-05-16 20:23:18.243969] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.273 [2024-05-16 20:23:18.243983] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.273 [2024-05-16 20:23:18.247591] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.273 [2024-05-16 20:23:18.256671] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.273 [2024-05-16 20:23:18.257145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.273 [2024-05-16 20:23:18.257176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.273 [2024-05-16 20:23:18.257195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.273 [2024-05-16 20:23:18.257443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.273 [2024-05-16 20:23:18.257702] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.273 [2024-05-16 20:23:18.257727] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.273 [2024-05-16 20:23:18.257745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.273 [2024-05-16 20:23:18.261119] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.273 [2024-05-16 20:23:18.270748] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.273 [2024-05-16 20:23:18.271417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.273 [2024-05-16 20:23:18.271455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.273 [2024-05-16 20:23:18.271476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.273 [2024-05-16 20:23:18.271725] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.273 [2024-05-16 20:23:18.271992] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.273 [2024-05-16 20:23:18.272016] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.273 [2024-05-16 20:23:18.272033] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.273 [2024-05-16 20:23:18.275604] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.273 [2024-05-16 20:23:18.284569] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.273 [2024-05-16 20:23:18.284967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.273 [2024-05-16 20:23:18.284996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.273 [2024-05-16 20:23:18.285012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.273 [2024-05-16 20:23:18.285264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.273 [2024-05-16 20:23:18.285518] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.273 [2024-05-16 20:23:18.285543] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.273 [2024-05-16 20:23:18.285560] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.273 [2024-05-16 20:23:18.289221] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.273 [2024-05-16 20:23:18.298481] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.273 [2024-05-16 20:23:18.298832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.273 [2024-05-16 20:23:18.298893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.273 [2024-05-16 20:23:18.298909] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.273 [2024-05-16 20:23:18.299127] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.273 [2024-05-16 20:23:18.299385] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.273 [2024-05-16 20:23:18.299408] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.273 [2024-05-16 20:23:18.299423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.274 [2024-05-16 20:23:18.303050] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.274 [2024-05-16 20:23:18.312486] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.274 [2024-05-16 20:23:18.312891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.274 [2024-05-16 20:23:18.312935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.274 [2024-05-16 20:23:18.312952] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.274 [2024-05-16 20:23:18.313192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.274 [2024-05-16 20:23:18.313454] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.274 [2024-05-16 20:23:18.313489] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.274 [2024-05-16 20:23:18.313504] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.274 [2024-05-16 20:23:18.315197] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.274 [2024-05-16 20:23:18.315241] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.274 [2024-05-16 20:23:18.315255] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.274 [2024-05-16 20:23:18.315267] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.274 [2024-05-16 20:23:18.315278] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.274 [2024-05-16 20:23:18.315369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.274 [2024-05-16 20:23:18.315527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:31.274 [2024-05-16 20:23:18.315531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.274 [2024-05-16 20:23:18.316906] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.274 [2024-05-16 20:23:18.326193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.274 [2024-05-16 20:23:18.326704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.274 [2024-05-16 20:23:18.326750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.274 [2024-05-16 20:23:18.326770] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.274 [2024-05-16 20:23:18.327006] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.274 [2024-05-16 20:23:18.327245] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.274 [2024-05-16 20:23:18.327267] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.274 [2024-05-16 20:23:18.327284] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
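The three "Reactor started on core ..." notices above follow from the -m 0xE core mask the target was launched with: 0xE is binary 1110, which selects cores 1, 2 and 3 and matches the "Total cores available: 3" notice earlier. A small sketch that decodes such a mask (generic bit arithmetic, not SPDK's actual mask parser):

```c
/* Sketch: decode a reactor core mask like the -m 0xE used above.
 * 0xE = 0b1110 selects cores 1, 2 and 3, matching the three
 * "Reactor started on core ..." notices.  Generic bit twiddling,
 * not SPDK's mask-parsing code. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xE;   /* core mask passed via -m in the log */
    int total = 0;

    for (int core = 0; core < (int)(8 * sizeof(mask)); core++) {
        if (mask & (1UL << core)) {
            printf("reactor would run on core %d\n", core);
            total++;
        }
    }
    printf("total cores selected: %d\n", total);   /* prints 3 for 0xE */
    return 0;
}
```

Running it lists cores 1, 2 and 3, the same set the reactors bound to above.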
00:24:31.274 [2024-05-16 20:23:18.330630] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.274 [2024-05-16 20:23:18.339924] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.274 [2024-05-16 20:23:18.340434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.274 [2024-05-16 20:23:18.340471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.274 [2024-05-16 20:23:18.340491] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.274 [2024-05-16 20:23:18.340741] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.274 [2024-05-16 20:23:18.341001] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.274 [2024-05-16 20:23:18.341025] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.274 [2024-05-16 20:23:18.341042] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.274 [2024-05-16 20:23:18.344406] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.274 [2024-05-16 20:23:18.353594] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.274 [2024-05-16 20:23:18.354088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.274 [2024-05-16 20:23:18.354128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.274 [2024-05-16 20:23:18.354148] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.274 [2024-05-16 20:23:18.354393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.274 [2024-05-16 20:23:18.354613] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.274 [2024-05-16 20:23:18.354635] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.274 [2024-05-16 20:23:18.354651] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.274 [2024-05-16 20:23:18.357957] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.274 [2024-05-16 20:23:18.367345] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.274 [2024-05-16 20:23:18.367785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.274 [2024-05-16 20:23:18.367820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.274 [2024-05-16 20:23:18.367859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.274 [2024-05-16 20:23:18.368087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.274 [2024-05-16 20:23:18.368342] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.274 [2024-05-16 20:23:18.368363] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.274 [2024-05-16 20:23:18.368380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.274 [2024-05-16 20:23:18.371644] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.274 [2024-05-16 20:23:18.380993] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.274 [2024-05-16 20:23:18.381491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.274 [2024-05-16 20:23:18.381530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.274 [2024-05-16 20:23:18.381549] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.274 [2024-05-16 20:23:18.381791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.274 [2024-05-16 20:23:18.382041] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.274 [2024-05-16 20:23:18.382064] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.274 [2024-05-16 20:23:18.382082] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.274 [2024-05-16 20:23:18.385369] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.274 [2024-05-16 20:23:18.394538] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.274 [2024-05-16 20:23:18.395009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.274 [2024-05-16 20:23:18.395043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.274 [2024-05-16 20:23:18.395062] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.274 [2024-05-16 20:23:18.395301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.274 [2024-05-16 20:23:18.395518] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.274 [2024-05-16 20:23:18.395540] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.274 [2024-05-16 20:23:18.395555] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.274 [2024-05-16 20:23:18.398848] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.274 [2024-05-16 20:23:18.408225] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.274 [2024-05-16 20:23:18.408585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.274 [2024-05-16 20:23:18.408612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.274 [2024-05-16 20:23:18.408628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.274 [2024-05-16 20:23:18.408845] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.274 [2024-05-16 20:23:18.409104] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.274 [2024-05-16 20:23:18.409125] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.274 [2024-05-16 20:23:18.409138] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.274 [2024-05-16 20:23:18.412465] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.534 [2024-05-16 20:23:18.421807] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.534 [2024-05-16 20:23:18.422129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.534 [2024-05-16 20:23:18.422158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.534 [2024-05-16 20:23:18.422175] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.535 [2024-05-16 20:23:18.422408] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.535 [2024-05-16 20:23:18.422640] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.535 [2024-05-16 20:23:18.422661] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.535 [2024-05-16 20:23:18.422675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.535 [2024-05-16 20:23:18.425890] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.535 [2024-05-16 20:23:18.435389] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.535 [2024-05-16 20:23:18.435715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.535 [2024-05-16 20:23:18.435743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.535 [2024-05-16 20:23:18.435758] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.535 [2024-05-16 20:23:18.435987] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.535 [2024-05-16 20:23:18.436221] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.535 [2024-05-16 20:23:18.436243] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.535 [2024-05-16 20:23:18.436256] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.535 [2024-05-16 20:23:18.439497] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.535 [2024-05-16 20:23:18.448956] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.535 [2024-05-16 20:23:18.449350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.535 [2024-05-16 20:23:18.449378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.535 [2024-05-16 20:23:18.449394] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.535 [2024-05-16 20:23:18.449611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.535 [2024-05-16 20:23:18.449863] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.535 [2024-05-16 20:23:18.449885] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.535 [2024-05-16 20:23:18.449899] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.535 [2024-05-16 20:23:18.453162] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.535 [2024-05-16 20:23:18.462582] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.535 [2024-05-16 20:23:18.462942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.535 [2024-05-16 20:23:18.462969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.535 [2024-05-16 20:23:18.462990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.535 [2024-05-16 20:23:18.463207] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.535 [2024-05-16 20:23:18.463435] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.535 [2024-05-16 20:23:18.463455] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.535 [2024-05-16 20:23:18.463468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.535 [2024-05-16 20:23:18.466680] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.535 [2024-05-16 20:23:18.476204] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.535 [2024-05-16 20:23:18.476550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.535 [2024-05-16 20:23:18.476578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.535 [2024-05-16 20:23:18.476594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.535 [2024-05-16 20:23:18.476810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.535 [2024-05-16 20:23:18.477068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.535 [2024-05-16 20:23:18.477090] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.535 [2024-05-16 20:23:18.477103] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.535 [2024-05-16 20:23:18.480320] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.535 [2024-05-16 20:23:18.489663] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.535 [2024-05-16 20:23:18.490000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.535 [2024-05-16 20:23:18.490028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.535 [2024-05-16 20:23:18.490043] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.535 [2024-05-16 20:23:18.490259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.535 [2024-05-16 20:23:18.490489] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.535 [2024-05-16 20:23:18.490510] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.535 [2024-05-16 20:23:18.490523] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.535 [2024-05-16 20:23:18.493762] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.535 [2024-05-16 20:23:18.503287] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.535 [2024-05-16 20:23:18.503654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.535 [2024-05-16 20:23:18.503681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.535 [2024-05-16 20:23:18.503697] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.535 [2024-05-16 20:23:18.503920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.535 [2024-05-16 20:23:18.504141] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.535 [2024-05-16 20:23:18.504166] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.535 [2024-05-16 20:23:18.504195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.535 [2024-05-16 20:23:18.507368] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.535 [2024-05-16 20:23:18.516879] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.535 [2024-05-16 20:23:18.517184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.535 [2024-05-16 20:23:18.517212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.535 [2024-05-16 20:23:18.517227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.535 [2024-05-16 20:23:18.517443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.535 [2024-05-16 20:23:18.517672] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.535 [2024-05-16 20:23:18.517693] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.535 [2024-05-16 20:23:18.517706] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.535 [2024-05-16 20:23:18.520948] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.535 [2024-05-16 20:23:18.530421] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.535 [2024-05-16 20:23:18.530764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.535 [2024-05-16 20:23:18.530791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.535 [2024-05-16 20:23:18.530806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.535 [2024-05-16 20:23:18.531031] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.535 [2024-05-16 20:23:18.531264] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.535 [2024-05-16 20:23:18.531285] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.535 [2024-05-16 20:23:18.531298] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.535 [2024-05-16 20:23:18.534525] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.535 [2024-05-16 20:23:18.544044] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.535 [2024-05-16 20:23:18.544406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.535 [2024-05-16 20:23:18.544433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.535 [2024-05-16 20:23:18.544449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.535 [2024-05-16 20:23:18.544665] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.535 [2024-05-16 20:23:18.544924] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.535 [2024-05-16 20:23:18.544946] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.535 [2024-05-16 20:23:18.544960] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.535 [2024-05-16 20:23:18.548166] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.535 [2024-05-16 20:23:18.557575] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.535 [2024-05-16 20:23:18.557922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.535 [2024-05-16 20:23:18.557950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.535 [2024-05-16 20:23:18.557966] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.535 [2024-05-16 20:23:18.558197] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.535 [2024-05-16 20:23:18.558411] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.536 [2024-05-16 20:23:18.558432] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.536 [2024-05-16 20:23:18.558445] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.536 [2024-05-16 20:23:18.561704] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.536 [2024-05-16 20:23:18.571132] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.536 [2024-05-16 20:23:18.571513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.536 [2024-05-16 20:23:18.571540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.536 [2024-05-16 20:23:18.571556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.536 [2024-05-16 20:23:18.571772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.536 [2024-05-16 20:23:18.572032] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.536 [2024-05-16 20:23:18.572055] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.536 [2024-05-16 20:23:18.572068] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.536 [2024-05-16 20:23:18.575323] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.536 [2024-05-16 20:23:18.584675] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.536 [2024-05-16 20:23:18.585071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.536 [2024-05-16 20:23:18.585098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.536 [2024-05-16 20:23:18.585114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.536 [2024-05-16 20:23:18.585331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.536 [2024-05-16 20:23:18.585560] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.536 [2024-05-16 20:23:18.585580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.536 [2024-05-16 20:23:18.585594] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.536 [2024-05-16 20:23:18.588787] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.536 [2024-05-16 20:23:18.598321] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.536 [2024-05-16 20:23:18.598651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.536 [2024-05-16 20:23:18.598679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.536 [2024-05-16 20:23:18.598695] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.536 [2024-05-16 20:23:18.598925] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.536 [2024-05-16 20:23:18.599146] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.536 [2024-05-16 20:23:18.599182] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.536 [2024-05-16 20:23:18.599195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.536 [2024-05-16 20:23:18.602447] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.536 [2024-05-16 20:23:18.611930] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.536 [2024-05-16 20:23:18.612277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.536 [2024-05-16 20:23:18.612305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.536 [2024-05-16 20:23:18.612320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.536 [2024-05-16 20:23:18.612537] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.536 [2024-05-16 20:23:18.612757] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.536 [2024-05-16 20:23:18.612778] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.536 [2024-05-16 20:23:18.612792] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.536 [2024-05-16 20:23:18.616060] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.536 [2024-05-16 20:23:18.625572] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.536 [2024-05-16 20:23:18.625915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.536 [2024-05-16 20:23:18.625943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.536 [2024-05-16 20:23:18.625958] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.536 [2024-05-16 20:23:18.626175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.536 [2024-05-16 20:23:18.626403] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.536 [2024-05-16 20:23:18.626424] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.536 [2024-05-16 20:23:18.626437] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.536 [2024-05-16 20:23:18.629663] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.536 [2024-05-16 20:23:18.639102] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.536 [2024-05-16 20:23:18.639446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.536 [2024-05-16 20:23:18.639473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.536 [2024-05-16 20:23:18.639489] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.536 [2024-05-16 20:23:18.639705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.536 [2024-05-16 20:23:18.639963] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.536 [2024-05-16 20:23:18.639984] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.536 [2024-05-16 20:23:18.640005] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.536 [2024-05-16 20:23:18.643269] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.536 [2024-05-16 20:23:18.652556] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.536 [2024-05-16 20:23:18.652916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.536 [2024-05-16 20:23:18.652945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.536 [2024-05-16 20:23:18.652961] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.536 [2024-05-16 20:23:18.653178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.536 [2024-05-16 20:23:18.653408] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.536 [2024-05-16 20:23:18.653429] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.536 [2024-05-16 20:23:18.653442] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.536 [2024-05-16 20:23:18.656669] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.536 [2024-05-16 20:23:18.666193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.536 [2024-05-16 20:23:18.666537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.536 [2024-05-16 20:23:18.666564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.536 [2024-05-16 20:23:18.666580] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.536 [2024-05-16 20:23:18.666797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.536 [2024-05-16 20:23:18.667055] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.536 [2024-05-16 20:23:18.667076] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.537 [2024-05-16 20:23:18.667090] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.537 [2024-05-16 20:23:18.670358] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.796 [2024-05-16 20:23:18.679947] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.796 [2024-05-16 20:23:18.680302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.796 [2024-05-16 20:23:18.680331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.796 [2024-05-16 20:23:18.680347] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.796 [2024-05-16 20:23:18.680583] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.796 [2024-05-16 20:23:18.680798] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.796 [2024-05-16 20:23:18.680818] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.796 [2024-05-16 20:23:18.680846] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.796 [2024-05-16 20:23:18.684210] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.796 [2024-05-16 20:23:18.693521] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.796 [2024-05-16 20:23:18.693883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.796 [2024-05-16 20:23:18.693912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.796 [2024-05-16 20:23:18.693928] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.796 [2024-05-16 20:23:18.694158] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.796 [2024-05-16 20:23:18.694373] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.796 [2024-05-16 20:23:18.694394] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.796 [2024-05-16 20:23:18.694407] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.796 [2024-05-16 20:23:18.697605] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.796 [2024-05-16 20:23:18.707130] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.796 [2024-05-16 20:23:18.707449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.796 [2024-05-16 20:23:18.707477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.796 [2024-05-16 20:23:18.707492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.796 [2024-05-16 20:23:18.707709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.796 [2024-05-16 20:23:18.707976] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.796 [2024-05-16 20:23:18.707999] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.796 [2024-05-16 20:23:18.708012] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.796 [2024-05-16 20:23:18.711232] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.796 [2024-05-16 20:23:18.720579] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.796 [2024-05-16 20:23:18.720927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.796 [2024-05-16 20:23:18.720955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.796 [2024-05-16 20:23:18.720970] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.796 [2024-05-16 20:23:18.721186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.796 [2024-05-16 20:23:18.721415] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.796 [2024-05-16 20:23:18.721436] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.796 [2024-05-16 20:23:18.721449] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.796 [2024-05-16 20:23:18.724683] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.796 [2024-05-16 20:23:18.734196] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.796 [2024-05-16 20:23:18.734521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.796 [2024-05-16 20:23:18.734549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.796 [2024-05-16 20:23:18.734565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.796 [2024-05-16 20:23:18.734786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.796 [2024-05-16 20:23:18.735046] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.796 [2024-05-16 20:23:18.735068] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.796 [2024-05-16 20:23:18.735081] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.796 [2024-05-16 20:23:18.738324] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.796 [2024-05-16 20:23:18.747574] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.796 [2024-05-16 20:23:18.747924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.796 [2024-05-16 20:23:18.747952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.796 [2024-05-16 20:23:18.747968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.796 [2024-05-16 20:23:18.748184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.796 [2024-05-16 20:23:18.748413] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.796 [2024-05-16 20:23:18.748434] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.796 [2024-05-16 20:23:18.748447] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.796 [2024-05-16 20:23:18.751671] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.796 [2024-05-16 20:23:18.761191] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.796 [2024-05-16 20:23:18.761535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.796 [2024-05-16 20:23:18.761562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.796 [2024-05-16 20:23:18.761577] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.797 [2024-05-16 20:23:18.761794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.797 [2024-05-16 20:23:18.762052] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.797 [2024-05-16 20:23:18.762074] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.797 [2024-05-16 20:23:18.762088] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.797 [2024-05-16 20:23:18.765307] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.797 [2024-05-16 20:23:18.774646] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.797 [2024-05-16 20:23:18.775014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.797 [2024-05-16 20:23:18.775042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.797 [2024-05-16 20:23:18.775058] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.797 [2024-05-16 20:23:18.775274] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.797 [2024-05-16 20:23:18.775504] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.797 [2024-05-16 20:23:18.775524] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.797 [2024-05-16 20:23:18.775541] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.797 [2024-05-16 20:23:18.778783] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.797 [2024-05-16 20:23:18.788313] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.797 [2024-05-16 20:23:18.788638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.797 [2024-05-16 20:23:18.788666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.797 [2024-05-16 20:23:18.788681] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.797 [2024-05-16 20:23:18.788906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.797 [2024-05-16 20:23:18.789128] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.797 [2024-05-16 20:23:18.789149] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.797 [2024-05-16 20:23:18.789176] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.797 [2024-05-16 20:23:18.792404] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.797 [2024-05-16 20:23:18.801896] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.797 [2024-05-16 20:23:18.802266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.797 [2024-05-16 20:23:18.802294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.797 [2024-05-16 20:23:18.802309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.797 [2024-05-16 20:23:18.802525] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.797 [2024-05-16 20:23:18.802755] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.797 [2024-05-16 20:23:18.802775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.797 [2024-05-16 20:23:18.802788] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.797 [2024-05-16 20:23:18.806035] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.797 [2024-05-16 20:23:18.815383] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.797 [2024-05-16 20:23:18.815726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.797 [2024-05-16 20:23:18.815754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.797 [2024-05-16 20:23:18.815769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.797 [2024-05-16 20:23:18.815996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.797 [2024-05-16 20:23:18.816231] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.797 [2024-05-16 20:23:18.816253] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.797 [2024-05-16 20:23:18.816267] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.797 [2024-05-16 20:23:18.819535] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.797 [2024-05-16 20:23:18.828978] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.797 [2024-05-16 20:23:18.829323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.797 [2024-05-16 20:23:18.829354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.797 [2024-05-16 20:23:18.829371] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.797 [2024-05-16 20:23:18.829587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.797 [2024-05-16 20:23:18.829817] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.797 [2024-05-16 20:23:18.829860] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.797 [2024-05-16 20:23:18.829875] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.797 [2024-05-16 20:23:18.833159] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.797 [2024-05-16 20:23:18.842560] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.797 [2024-05-16 20:23:18.842900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.797 [2024-05-16 20:23:18.842928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.797 [2024-05-16 20:23:18.842944] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.797 [2024-05-16 20:23:18.843175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.797 [2024-05-16 20:23:18.843391] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.797 [2024-05-16 20:23:18.843411] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.797 [2024-05-16 20:23:18.843424] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.797 [2024-05-16 20:23:18.846671] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.797 [2024-05-16 20:23:18.856186] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.797 [2024-05-16 20:23:18.856532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.797 [2024-05-16 20:23:18.856560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.797 [2024-05-16 20:23:18.856575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.797 [2024-05-16 20:23:18.856792] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.797 [2024-05-16 20:23:18.857051] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.797 [2024-05-16 20:23:18.857073] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.797 [2024-05-16 20:23:18.857086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.797 [2024-05-16 20:23:18.860334] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.797 [2024-05-16 20:23:18.869607] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.797 [2024-05-16 20:23:18.869996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.797 [2024-05-16 20:23:18.870024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.797 [2024-05-16 20:23:18.870039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.797 [2024-05-16 20:23:18.870256] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.797 [2024-05-16 20:23:18.870482] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.797 [2024-05-16 20:23:18.870504] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.797 [2024-05-16 20:23:18.870518] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.797 [2024-05-16 20:23:18.873769] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.797 [2024-05-16 20:23:18.883260] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.797 [2024-05-16 20:23:18.883602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.797 [2024-05-16 20:23:18.883629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.797 [2024-05-16 20:23:18.883645] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.797 [2024-05-16 20:23:18.883869] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.797 [2024-05-16 20:23:18.884090] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.797 [2024-05-16 20:23:18.884111] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.797 [2024-05-16 20:23:18.884125] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.797 [2024-05-16 20:23:18.887370] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.797 [2024-05-16 20:23:18.896733] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.797 [2024-05-16 20:23:18.897077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.797 [2024-05-16 20:23:18.897104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.797 [2024-05-16 20:23:18.897120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.797 [2024-05-16 20:23:18.897352] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.797 [2024-05-16 20:23:18.897566] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.797 [2024-05-16 20:23:18.897587] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.797 [2024-05-16 20:23:18.897600] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.797 [2024-05-16 20:23:18.900843] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.798 [2024-05-16 20:23:18.910473] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.798 [2024-05-16 20:23:18.910842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.798 [2024-05-16 20:23:18.910876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.798 [2024-05-16 20:23:18.910892] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.798 [2024-05-16 20:23:18.911109] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.798 [2024-05-16 20:23:18.911338] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.798 [2024-05-16 20:23:18.911359] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.798 [2024-05-16 20:23:18.911372] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.798 [2024-05-16 20:23:18.914666] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.798 [2024-05-16 20:23:18.924098] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.798 [2024-05-16 20:23:18.924397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.798 [2024-05-16 20:23:18.924425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.798 [2024-05-16 20:23:18.924440] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.798 [2024-05-16 20:23:18.924657] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.798 [2024-05-16 20:23:18.924890] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.798 [2024-05-16 20:23:18.924912] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.798 [2024-05-16 20:23:18.924926] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.798 [2024-05-16 20:23:18.928165] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.798 [2024-05-16 20:23:18.937741] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.798 [2024-05-16 20:23:18.938092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.798 [2024-05-16 20:23:18.938120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:31.798 [2024-05-16 20:23:18.938136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:31.798 [2024-05-16 20:23:18.938353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:31.798 [2024-05-16 20:23:18.938574] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.798 [2024-05-16 20:23:18.938600] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.798 [2024-05-16 20:23:18.938630] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.058 [2024-05-16 20:23:18.942026] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.058 [2024-05-16 20:23:18.951332] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.058 [2024-05-16 20:23:18.951669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.058 [2024-05-16 20:23:18.951697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:32.058 [2024-05-16 20:23:18.951713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:32.058 [2024-05-16 20:23:18.951939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:32.058 [2024-05-16 20:23:18.952176] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.058 [2024-05-16 20:23:18.952197] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:32.058 [2024-05-16 20:23:18.952211] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.058 [2024-05-16 20:23:18.955414] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:32.058 [2024-05-16 20:23:18.964915] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.058 [2024-05-16 20:23:18.965264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.058 [2024-05-16 20:23:18.965293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:32.058 [2024-05-16 20:23:18.965314] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:32.058 [2024-05-16 20:23:18.965532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:32.058 [2024-05-16 20:23:18.965762] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.058 [2024-05-16 20:23:18.965783] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:32.058 [2024-05-16 20:23:18.965796] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.058 [2024-05-16 20:23:18.969040] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.058 [2024-05-16 20:23:18.978561] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.058 [2024-05-16 20:23:18.978886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.058 [2024-05-16 20:23:18.978915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:32.058 [2024-05-16 20:23:18.978930] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:32.058 [2024-05-16 20:23:18.979148] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:32.058 [2024-05-16 20:23:18.979378] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.058 [2024-05-16 20:23:18.979399] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:32.058 [2024-05-16 20:23:18.979413] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.058 [2024-05-16 20:23:18.982656] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:32.058 [2024-05-16 20:23:18.992204] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.058 [2024-05-16 20:23:18.992542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.058 [2024-05-16 20:23:18.992569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:32.058 [2024-05-16 20:23:18.992585] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:32.058 [2024-05-16 20:23:18.992802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:32.058 [2024-05-16 20:23:18.993062] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.058 [2024-05-16 20:23:18.993084] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:32.058 [2024-05-16 20:23:18.993098] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.058 [2024-05-16 20:23:18.996375] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.058 [2024-05-16 20:23:19.005823] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.058 [2024-05-16 20:23:19.006204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.058 [2024-05-16 20:23:19.006232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:32.058 [2024-05-16 20:23:19.006247] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:32.058 [2024-05-16 20:23:19.006464] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:32.058 [2024-05-16 20:23:19.006694] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.058 [2024-05-16 20:23:19.006720] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:32.058 [2024-05-16 20:23:19.006734] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.058 [2024-05-16 20:23:19.010004] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:32.058 [2024-05-16 20:23:19.019360] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.058 [2024-05-16 20:23:19.019730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.058 [2024-05-16 20:23:19.019758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:32.058 [2024-05-16 20:23:19.019773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:32.058 [2024-05-16 20:23:19.019998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:32.058 [2024-05-16 20:23:19.020234] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.058 [2024-05-16 20:23:19.020255] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:32.058 [2024-05-16 20:23:19.020268] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.058 [2024-05-16 20:23:19.023466] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.058 [2024-05-16 20:23:19.032821] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.058 [2024-05-16 20:23:19.033158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.058 [2024-05-16 20:23:19.033185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:32.058 [2024-05-16 20:23:19.033201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:32.058 [2024-05-16 20:23:19.033418] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:32.058 [2024-05-16 20:23:19.033648] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.058 [2024-05-16 20:23:19.033668] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:32.058 [2024-05-16 20:23:19.033681] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.058 [2024-05-16 20:23:19.036951] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:32.058 [2024-05-16 20:23:19.046464] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.058 [2024-05-16 20:23:19.046775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.058 [2024-05-16 20:23:19.046802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:32.058 [2024-05-16 20:23:19.046818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:32.058 [2024-05-16 20:23:19.047041] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:32.058 [2024-05-16 20:23:19.047262] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.058 [2024-05-16 20:23:19.047284] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:32.058 [2024-05-16 20:23:19.047297] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.058 [2024-05-16 20:23:19.050613] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.058 20:23:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:32.058 20:23:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:24:32.058 20:23:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:32.058 20:23:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:32.058 20:23:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:32.058 [2024-05-16 20:23:19.060153] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.058 [2024-05-16 20:23:19.060520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.058 [2024-05-16 20:23:19.060547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:32.058 [2024-05-16 20:23:19.060563] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:32.059 [2024-05-16 20:23:19.060779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:32.059 [2024-05-16 20:23:19.061038] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.059 [2024-05-16 20:23:19.061062] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:32.059 [2024-05-16 20:23:19.061075] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.059 [2024-05-16 20:23:19.064378] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:32.059 [2024-05-16 20:23:19.073631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.059 [2024-05-16 20:23:19.074015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.059 [2024-05-16 20:23:19.074043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:32.059 [2024-05-16 20:23:19.074059] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:32.059 [2024-05-16 20:23:19.074274] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:32.059 [2024-05-16 20:23:19.074506] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.059 [2024-05-16 20:23:19.074527] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:32.059 [2024-05-16 20:23:19.074540] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:32.059 [2024-05-16 20:23:19.077829] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:32.059 [2024-05-16 20:23:19.080685] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:32.059 [2024-05-16 20:23:19.087260] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.059 [2024-05-16 20:23:19.087608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.059 [2024-05-16 20:23:19.087635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:32.059 [2024-05-16 20:23:19.087656] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:32.059 [2024-05-16 20:23:19.087881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:32.059 [2024-05-16 20:23:19.088102] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.059 [2024-05-16 20:23:19.088123] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:32.059 [2024-05-16 20:23:19.088137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:32.059 [2024-05-16 20:23:19.091438] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.059 [2024-05-16 20:23:19.100882] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.059 [2024-05-16 20:23:19.101266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.059 [2024-05-16 20:23:19.101293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:32.059 [2024-05-16 20:23:19.101309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:32.059 [2024-05-16 20:23:19.101544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:32.059 [2024-05-16 20:23:19.101766] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.059 [2024-05-16 20:23:19.101786] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:32.059 [2024-05-16 20:23:19.101799] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.059 [2024-05-16 20:23:19.105003] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.059 [2024-05-16 20:23:19.114455] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.059 [2024-05-16 20:23:19.114906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.059 [2024-05-16 20:23:19.114940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:32.059 [2024-05-16 20:23:19.114959] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:32.059 [2024-05-16 20:23:19.115196] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:32.059 [2024-05-16 20:23:19.115414] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.059 [2024-05-16 20:23:19.115435] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:32.059 [2024-05-16 20:23:19.115452] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.059 [2024-05-16 20:23:19.118701] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
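The repeated errno = 111 lines above are ECONNREFUSED: bdevperf keeps retrying its controller reset against 10.0.0.2:4420 while nothing is accepting connections on that port yet, and each attempt ends in "Resetting controller failed." The retries stop once the interleaved rpc_cmd calls finish bringing up the target side; "Resetting controller successful" appears further down, shortly after the listener is added. Condensed into plain rpc.py invocations (using scripts/rpc.py directly instead of the test framework's rpc_cmd wrapper is an assumption; method names and flags are taken from the trace), that target-side setup is roughly:

    # Hedged sketch of the target-side configuration that the rpc_cmd
    # traces interleaved with the errors above perform.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420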
00:24:32.059 [2024-05-16 20:23:19.128045] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.059 Malloc0 00:24:32.059 [2024-05-16 20:23:19.128473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.059 [2024-05-16 20:23:19.128505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:32.059 [2024-05-16 20:23:19.128526] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:32.059 [2024-05-16 20:23:19.128753] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:32.059 [2024-05-16 20:23:19.128995] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.059 [2024-05-16 20:23:19.129017] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:32.059 [2024-05-16 20:23:19.129035] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.059 [2024-05-16 20:23:19.132292] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:32.059 [2024-05-16 20:23:19.141789] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.059 [2024-05-16 20:23:19.142162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.059 [2024-05-16 20:23:19.142189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226fa30 with addr=10.0.0.2, port=4420 00:24:32.059 [2024-05-16 20:23:19.142206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fa30 is same with the state(5) to be set 00:24:32.059 [2024-05-16 20:23:19.142436] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226fa30 (9): Bad file descriptor 00:24:32.059 [2024-05-16 20:23:19.142650] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.059 [2024-05-16 20:23:19.142671] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:32.059 [2024-05-16 20:23:19.142684] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:32.059 [2024-05-16 20:23:19.146047] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.059 [2024-05-16 20:23:19.147661] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:32.059 [2024-05-16 20:23:19.147988] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.059 20:23:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 316054 00:24:32.059 [2024-05-16 20:23:19.155367] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.317 [2024-05-16 20:23:19.277315] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:42.285 00:24:42.285 Latency(us) 00:24:42.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.285 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:42.285 Verification LBA range: start 0x0 length 0x4000 00:24:42.285 Nvme1n1 : 15.01 6134.95 23.96 10600.40 0.00 7623.39 843.47 19612.25 00:24:42.285 =================================================================================================================== 00:24:42.285 Total : 6134.95 23.96 10600.40 0.00 7623.39 843.47 19612.25 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:42.285 rmmod nvme_tcp 00:24:42.285 rmmod nvme_fabrics 00:24:42.285 rmmod nvme_keyring 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@489 -- # '[' -n 316721 ']' 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 316721 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 316721 ']' 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 316721 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:24:42.285 20:23:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:42.286 20:23:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 316721 00:24:42.286 20:23:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:42.286 20:23:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:42.286 20:23:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 316721' 00:24:42.286 killing process with pid 316721 00:24:42.286 20:23:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 316721 00:24:42.286 [2024-05-16 20:23:27.942321] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:42.286 20:23:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 316721 00:24:42.286 20:23:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:42.286 20:23:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:42.286 20:23:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:42.286 20:23:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:42.286 20:23:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:42.286 20:23:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.286 20:23:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:42.286 20:23:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.222 20:23:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:43.222 00:24:43.222 real 0m22.395s 00:24:43.222 user 0m59.576s 00:24:43.222 sys 0m4.467s 00:24:43.222 20:23:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:43.222 20:23:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:43.222 ************************************ 00:24:43.222 END TEST nvmf_bdevperf 00:24:43.222 ************************************ 00:24:43.222 20:23:30 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:43.222 20:23:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:43.222 20:23:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:43.222 20:23:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:43.222 ************************************ 00:24:43.222 START TEST nvmf_target_disconnect 00:24:43.222 ************************************ 00:24:43.222 20:23:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:43.479 * 
Looking for test storage... 00:24:43.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:43.480 20:23:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
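gather_supported_nvmf_pci_devs, running here, builds allow-lists of PCI device IDs for the NICs the test knows about (Intel E810 0x1592/0x159b, Intel X722 0x37d2, and several Mellanox parts) and then resolves each matching PCI function to its kernel net device through sysfs, which is where the cvl_0_0/cvl_0_1 names reported below come from. The resolution step is just a glob over the device's net/ directory; a standalone equivalent, with the PCI address taken from this log, would be:

    # Hedged sketch of the pci_net_devs lookup done by nvmf/common.sh:
    # list the netdev(s) bound to a given PCI function via sysfs.
    pci=0000:09:00.0
    ls "/sys/bus/pci/devices/$pci/net/"    # reports cvl_0_0 on this system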
00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:45.382 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:45.382 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.382 20:23:32 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:45.382 Found net devices under 0000:09:00.0: cvl_0_0 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:45.382 Found net devices under 0000:09:00.1: cvl_0_1 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:45.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:45.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:24:45.382 00:24:45.382 --- 10.0.0.2 ping statistics --- 00:24:45.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.382 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:45.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:45.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:24:45.382 00:24:45.382 --- 10.0.0.1 ping statistics --- 00:24:45.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.382 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:45.382 20:23:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:45.383 20:23:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:45.383 20:23:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:45.383 20:23:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:45.383 20:23:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:45.383 ************************************ 00:24:45.383 START TEST nvmf_target_disconnect_tc1 00:24:45.383 ************************************ 00:24:45.383 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:24:45.383 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:45.383 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:24:45.383 
20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:45.383 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:45.383 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:45.383 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:45.383 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:45.383 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:45.383 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:45.383 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:45.383 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:24:45.383 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:45.641 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.641 [2024-05-16 20:23:32.575943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.641 [2024-05-16 20:23:32.576022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f8f90 with addr=10.0.0.2, port=4420 00:24:45.641 [2024-05-16 20:23:32.576064] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:45.641 [2024-05-16 20:23:32.576092] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:45.641 [2024-05-16 20:23:32.576108] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:24:45.641 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:24:45.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:45.641 Initializing NVMe Controllers 00:24:45.641 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:24:45.641 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:45.642 00:24:45.642 real 0m0.102s 00:24:45.642 user 0m0.046s 00:24:45.642 sys 
0m0.055s 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:45.642 ************************************ 00:24:45.642 END TEST nvmf_target_disconnect_tc1 00:24:45.642 ************************************ 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:45.642 ************************************ 00:24:45.642 START TEST nvmf_target_disconnect_tc2 00:24:45.642 ************************************ 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=319870 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 319870 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 319870 ']' 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:45.642 20:23:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:45.642 [2024-05-16 20:23:32.693335] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
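target_disconnect_tc1, which finished just above, is a negative test: the reconnect example is pointed at 10.0.0.2:4420 at a point where no nvmf target is running, so spdk_nvme_probe() is expected to fail, and the NOT wrapper turns that failure (es=1) into a pass. Stripped of the wrapper, the check amounts to something like the following, with the reconnect arguments copied from the trace:

    # Hedged sketch of the tc1 idea: the probe must fail while nothing is
    # listening on 10.0.0.2:4420, so a successful run is the error case.
    if build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo "unexpected: probe succeeded with no target listening" >&2
        exit 1
    fi

For tc2, disconnect_init starts a fresh nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0xF0 (cores 4 through 7, matching the reactor messages below) and records its pid (319870) so it can be killed later; waitforlisten then blocks until the target answers on /var/tmp/spdk.sock before any configuration RPCs are sent. A minimal stand-in for that wait, not the framework's actual waitforlisten helper, could be:

    # Hedged sketch, not the autotest helper: poll the RPC socket until the
    # freshly started target responds, then continue with configuration.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done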
00:24:45.642 [2024-05-16 20:23:32.693411] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.642 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.642 [2024-05-16 20:23:32.758953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:45.900 [2024-05-16 20:23:32.873265] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.900 [2024-05-16 20:23:32.873324] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.900 [2024-05-16 20:23:32.873352] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.900 [2024-05-16 20:23:32.873363] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.900 [2024-05-16 20:23:32.873373] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.900 [2024-05-16 20:23:32.873664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:24:45.900 [2024-05-16 20:23:32.873695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:24:45.900 [2024-05-16 20:23:32.873751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:24:45.900 [2024-05-16 20:23:32.873754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:45.900 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:45.900 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:24:45.900 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:45.900 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:45.900 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:45.900 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.900 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:45.900 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.900 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:46.157 Malloc0 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:46.157 [2024-05-16 20:23:33.057878] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:46.157 [2024-05-16 20:23:33.085874] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:46.157 [2024-05-16 20:23:33.086138] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=319972 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:24:46.157 20:23:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:46.157 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.079 20:23:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 319870 00:24:48.079 20:23:35 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:24:48.079 Read completed with error (sct=0, sc=8) 00:24:48.079 starting I/O failed 00:24:48.079 Read completed with error (sct=0, sc=8) 00:24:48.079 starting I/O failed 00:24:48.079 Read completed with error (sct=0, sc=8) 00:24:48.079 starting I/O failed 00:24:48.079 Read completed with error (sct=0, sc=8) 00:24:48.079 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 [2024-05-16 20:23:35.111711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O 
failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 [2024-05-16 20:23:35.112072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 
Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 [2024-05-16 20:23:35.112373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Write completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.080 Read completed with 
error (sct=0, sc=8) 00:24:48.080 starting I/O failed 00:24:48.081 Write completed with error (sct=0, sc=8) 00:24:48.081 starting I/O failed 00:24:48.081 Write completed with error (sct=0, sc=8) 00:24:48.081 starting I/O failed 00:24:48.081 Write completed with error (sct=0, sc=8) 00:24:48.081 starting I/O failed 00:24:48.081 Read completed with error (sct=0, sc=8) 00:24:48.081 starting I/O failed 00:24:48.081 Read completed with error (sct=0, sc=8) 00:24:48.081 starting I/O failed 00:24:48.081 Write completed with error (sct=0, sc=8) 00:24:48.081 starting I/O failed 00:24:48.081 Read completed with error (sct=0, sc=8) 00:24:48.081 starting I/O failed 00:24:48.081 Read completed with error (sct=0, sc=8) 00:24:48.081 starting I/O failed 00:24:48.081 Write completed with error (sct=0, sc=8) 00:24:48.081 starting I/O failed 00:24:48.081 Write completed with error (sct=0, sc=8) 00:24:48.081 starting I/O failed 00:24:48.081 Write completed with error (sct=0, sc=8) 00:24:48.081 starting I/O failed 00:24:48.081 [2024-05-16 20:23:35.112663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:48.081 [2024-05-16 20:23:35.112847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.112908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.113014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.113047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.113166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.113193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.113293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.113320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.113433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.113459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.113550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.113575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 
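The target-side setup that produced this trace — the rpc_cmd calls for bdev_malloc_create, nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_ns and the two nvmf_subsystem_add_listener calls further up — can be reproduced outside the autotest harness with scripts/rpc.py against the same target. A minimal sketch follows; the SPDK checkout path, the cvl_0_0_ns_spdk namespace, the 0xF0 core mask and every RPC argument are taken from the log above, while the nvmf_tgt.log file name and the rpc.py polling loop (standing in for the harness's waitforlisten) are illustrative assumptions.

#!/usr/bin/env bash
# Sketch of the target-side setup performed by host/target_disconnect.sh in this trace.
set -euo pipefail
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path from the log

# Start the target on cores 4-7 (-m 0xF0) inside the test namespace, as in the log.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 > nvmf_tgt.log 2>&1 &
NVMF_PID=$!

# Wait until the RPC socket answers (the harness does this via waitforlisten).
until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done

# Same RPC sequence as the rpc_cmd calls recorded above.
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420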
00:24:48.081 [2024-05-16 20:23:35.113698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.113725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.113836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.113882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.113984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.114011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.114147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.114178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.114275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.114302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.114413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.114438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.114526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.114551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.114637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.114678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.114834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.114864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.115010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.115038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 
00:24:48.081 [2024-05-16 20:23:35.115133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.115181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.115312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.115354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.115466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.115492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.115577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.115603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.115717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.115742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.115888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.115918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.116024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.116051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.116151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.116180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.116333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.116358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.116479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.116504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 
00:24:48.081 [2024-05-16 20:23:35.116618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.116644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.116758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.116784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.116881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.116922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.117068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.117106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.117199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.117227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.117314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.117340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.117447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.117472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.117579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.117618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.117744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.117772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.117892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.117920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 
00:24:48.081 [2024-05-16 20:23:35.118022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.081 [2024-05-16 20:23:35.118048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.081 qpair failed and we were unable to recover it. 00:24:48.081 [2024-05-16 20:23:35.118160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.118187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.118285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.118313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.118424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.118450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.118568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.118597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.118700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.118743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.118879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.118907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.119017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.119042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.119129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.119155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.119275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.119300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 
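The failures above are the expected outcome of the tc2 sequence: target_disconnect.sh starts the reconnect example against the listener, lets I/O ramp up for two seconds, then hard-kills the target (target_disconnect.sh@45 in the trace), so every in-flight command is aborted and every subsequent connection attempt is refused. The aborted reads and writes report sct=0, sc=8, which per the NVMe generic status codes decodes to a command aborted due to submission queue deletion — consistent with the initiator tearing down each failed qpair. A minimal sketch of that driver sequence, with the reconnect flags copied verbatim from the invocation recorded above; reconnect.log is an assumed capture file and NVMF_PID is the target PID captured when nvmf_tgt was started, as in the earlier sketch.

# Sketch of the disrupt-and-observe sequence from host/target_disconnect.sh (tc2).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' > reconnect.log 2>&1 &
RECONNECT_PID=$!

sleep 2                 # let the workload queue I/O on all four cores
kill -9 "$NVMF_PID"     # hard-kill the target mid-I/O, as at target_disconnect.sh@45
sleep 2                 # the CQ transport errors and refused reconnect attempts above follow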
00:24:48.082 [2024-05-16 20:23:35.119416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.119443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.119563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.119588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.119691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.119716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.119829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.119864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.119951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.119978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.120067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.120092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.120201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.120227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.120308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.120336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.120422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.120448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.120536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.120563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 
00:24:48.082 [2024-05-16 20:23:35.120655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.120680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.120815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.120840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.120942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.120968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.121054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.121080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.121160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.121185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.121295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.121319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.121399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.121424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.121553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.121594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.121705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.121733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.121856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.121889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 
00:24:48.082 [2024-05-16 20:23:35.121975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.122001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.122099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.122125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.122207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.122234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.122385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.122424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.122519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.122545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.122632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.122658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.122773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.122798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.122912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.122939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.123025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.123049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.123134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.123159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 
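The retries above cycle through several tqpair pointers (0x7ff268000b90, 0x7ff270000b90, 0x7ff278000b90 and 0x1789f90), most likely because the example was started with -c 0xF and therefore drives I/O from four cores, each re-creating its own qpair on every reconnect attempt. If the console output has been captured to a file (reconnect.log is an assumed name), a quick tally shows the retry pressure per qpair:

# Count connect() retries per qpair pointer in a captured reconnect log.
grep -o 'tqpair=0x[0-9a-f]*' reconnect.log | sort | uniq -c | sort -rn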
00:24:48.082 [2024-05-16 20:23:35.123270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.123294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.123389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.123416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.123572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.123602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.082 qpair failed and we were unable to recover it. 00:24:48.082 [2024-05-16 20:23:35.123735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.082 [2024-05-16 20:23:35.123761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.123847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.123881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.123972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.123998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.124094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.124120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.124231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.124257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.124341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.124366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.124483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.124522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 
00:24:48.083 [2024-05-16 20:23:35.124610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.124637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.124747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.124772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.124864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.124890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.124972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.124997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.125077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.125103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.125203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.125228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.125318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.125344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.125434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.125459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.125571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.125598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.125694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.125723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 
00:24:48.083 [2024-05-16 20:23:35.125835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.125870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.125957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.125984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.126073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.126099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.126211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.126237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.126315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.126342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.126432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.126458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.126574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.126602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.126732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.126759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.126849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.126881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 00:24:48.083 [2024-05-16 20:23:35.126964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.083 [2024-05-16 20:23:35.126991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.083 qpair failed and we were unable to recover it. 
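Every retry fails in posix_sock_create with errno = 111, which on Linux is ECONNREFUSED: the namespace still routes 10.0.0.2, but nothing is listening on port 4420 once nvmf_tgt has been killed. The decode can be confirmed with a one-liner (python3 being available on the test node is an assumption):

# errno 111 -> symbolic name and message (ECONNREFUSED / "Connection refused")
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'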
00:24:48.083 [2024-05-16 20:23:35.127069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.083 [2024-05-16 20:23:35.127094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:48.083 qpair failed and we were unable to recover it.
00:24:48.083 [2024-05-16 20:23:35.127511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.083 [2024-05-16 20:23:35.127538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420
00:24:48.083 qpair failed and we were unable to recover it.
00:24:48.083 [2024-05-16 20:23:35.127619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.083 [2024-05-16 20:23:35.127646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:24:48.083 qpair failed and we were unable to recover it.
00:24:48.083 [2024-05-16 20:23:35.127727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.083 [2024-05-16 20:23:35.127755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420
00:24:48.083 qpair failed and we were unable to recover it.
[... representative entries only: the same three-line sequence (posix_sock_create connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock sock connection error, followed by "qpair failed and we were unable to recover it.") repeats continuously from 20:23:35.127 through 20:23:35.156 (console timestamps 00:24:48.083 to 00:24:48.088) for tqpairs 0x1789f90, 0x7ff268000b90, 0x7ff270000b90, and 0x7ff278000b90, always against addr=10.0.0.2, port=4420 ...]
00:24:48.088 [2024-05-16 20:23:35.156222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.088 [2024-05-16 20:23:35.156250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.088 qpair failed and we were unable to recover it. 00:24:48.088 [2024-05-16 20:23:35.156340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.088 [2024-05-16 20:23:35.156367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.088 qpair failed and we were unable to recover it. 00:24:48.088 [2024-05-16 20:23:35.156489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.088 [2024-05-16 20:23:35.156531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.088 qpair failed and we were unable to recover it. 00:24:48.088 [2024-05-16 20:23:35.156664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.088 [2024-05-16 20:23:35.156702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.088 qpair failed and we were unable to recover it. 00:24:48.088 [2024-05-16 20:23:35.156815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.088 [2024-05-16 20:23:35.156878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.088 qpair failed and we were unable to recover it. 00:24:48.088 [2024-05-16 20:23:35.157035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.088 [2024-05-16 20:23:35.157062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.088 qpair failed and we were unable to recover it. 00:24:48.088 [2024-05-16 20:23:35.157180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.088 [2024-05-16 20:23:35.157208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.088 qpair failed and we were unable to recover it. 00:24:48.088 [2024-05-16 20:23:35.157392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.088 [2024-05-16 20:23:35.157421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.088 qpair failed and we were unable to recover it. 00:24:48.088 [2024-05-16 20:23:35.157572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.088 [2024-05-16 20:23:35.157623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.088 qpair failed and we were unable to recover it. 00:24:48.092 [2024-05-16 20:23:35.157771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.092 [2024-05-16 20:23:35.157801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.092 qpair failed and we were unable to recover it. 
00:24:48.092 [2024-05-16 20:23:35.157913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.092 [2024-05-16 20:23:35.157939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.092 qpair failed and we were unable to recover it. 00:24:48.092 [2024-05-16 20:23:35.158074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.092 [2024-05-16 20:23:35.158102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.092 qpair failed and we were unable to recover it. 00:24:48.092 [2024-05-16 20:23:35.158233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.092 [2024-05-16 20:23:35.158277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.092 qpair failed and we were unable to recover it. 00:24:48.092 [2024-05-16 20:23:35.158409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.158456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.158568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.158594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.158680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.158706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.158788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.158814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.158934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.158961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.159045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.159071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.159147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.159172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 
00:24:48.093 [2024-05-16 20:23:35.159249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.159279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.159364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.159389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.159501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.159526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.159605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.159632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.159757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.159796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.159909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.159948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.160050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.160094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.160225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.160253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.160373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.160401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.160520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.160550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 
00:24:48.093 [2024-05-16 20:23:35.160640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.160668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.160758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.160785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.160870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.160913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.161033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.161061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.161184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.161212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.161335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.161363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.161454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.161482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.161644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.161669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.161789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.161817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.161947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.161975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 
00:24:48.093 [2024-05-16 20:23:35.162095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.162125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.162270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.162300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.162416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.162445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.162543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.162574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.162715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.162741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.162878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.162916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.163009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.163036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.163117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.163149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.163293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.163320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.163436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.163462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 
00:24:48.093 [2024-05-16 20:23:35.163549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.163577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.163668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.163695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.163780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.163808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.163932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.163962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.164105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.164130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.164211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.164236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.164394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.164435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.164602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.164627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.164713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.164751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.164841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.164876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 
00:24:48.093 [2024-05-16 20:23:35.164963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.165006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.165117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.165146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.165305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.165358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.165496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.165545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.165690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.093 [2024-05-16 20:23:35.165718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.093 qpair failed and we were unable to recover it. 00:24:48.093 [2024-05-16 20:23:35.165813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.165837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.165926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.165955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.166041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.166069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.166205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.166251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.166385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.166432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 
00:24:48.094 [2024-05-16 20:23:35.166537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.166586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.166660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.166686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.166776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.166803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.166918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.166944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.167071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.167098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.167207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.167233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.167324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.167350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.167446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.167471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.167576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.167617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.167732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.167757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 
00:24:48.094 [2024-05-16 20:23:35.167846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.167880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.167985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.168013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.168106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.168134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.168343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.168375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.168532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.168580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.168674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.168703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.168793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.168821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.168962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.168988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.169077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.169103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.169205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.169234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 
00:24:48.094 [2024-05-16 20:23:35.169420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.169447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.169564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.169604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.169706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.169750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.169865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.169892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.170004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.170029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.170135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.170163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.170252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.170281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.170364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.170392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.170513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.170541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.170623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.170664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 
00:24:48.094 [2024-05-16 20:23:35.170780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.170805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.170918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.170945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.171051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.171079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.171163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.171191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.171344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.171396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.171512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.171540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.171637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.171664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.171770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.171795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.171911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.171950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.172047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.172074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 
00:24:48.094 [2024-05-16 20:23:35.172166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.172192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.172299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.172328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.172473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.172517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.172662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.172690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.172802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.172828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.172957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.172983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.094 qpair failed and we were unable to recover it. 00:24:48.094 [2024-05-16 20:23:35.173090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.094 [2024-05-16 20:23:35.173137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.095 qpair failed and we were unable to recover it. 00:24:48.095 [2024-05-16 20:23:35.173227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.095 [2024-05-16 20:23:35.173252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.095 qpair failed and we were unable to recover it. 00:24:48.095 [2024-05-16 20:23:35.173341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.095 [2024-05-16 20:23:35.173367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.095 qpair failed and we were unable to recover it. 00:24:48.095 [2024-05-16 20:23:35.173503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.095 [2024-05-16 20:23:35.173528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.095 qpair failed and we were unable to recover it. 
00:24:48.095 [2024-05-16 20:23:35.173635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.095 [2024-05-16 20:23:35.173660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.095 qpair failed and we were unable to recover it. 00:24:48.095 [2024-05-16 20:23:35.173770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.095 [2024-05-16 20:23:35.173795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.095 qpair failed and we were unable to recover it. 00:24:48.095 [2024-05-16 20:23:35.173885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.095 [2024-05-16 20:23:35.173913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.095 qpair failed and we were unable to recover it. 00:24:48.095 [2024-05-16 20:23:35.174045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.095 [2024-05-16 20:23:35.174075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.095 qpair failed and we were unable to recover it. 00:24:48.095 [2024-05-16 20:23:35.174221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.095 [2024-05-16 20:23:35.174250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.095 qpair failed and we were unable to recover it. 00:24:48.095 [2024-05-16 20:23:35.174369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.095 [2024-05-16 20:23:35.174398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.095 qpair failed and we were unable to recover it. 00:24:48.095 [2024-05-16 20:23:35.174531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.095 [2024-05-16 20:23:35.174558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.095 qpair failed and we were unable to recover it. 00:24:48.095 [2024-05-16 20:23:35.174672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.095 [2024-05-16 20:23:35.174697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.095 qpair failed and we were unable to recover it. 00:24:48.095 [2024-05-16 20:23:35.174794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.095 [2024-05-16 20:23:35.174820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.095 qpair failed and we were unable to recover it. 00:24:48.095 [2024-05-16 20:23:35.174936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.095 [2024-05-16 20:23:35.174974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.095 qpair failed and we were unable to recover it. 
00:24:48.095 [2024-05-16 20:23:35.175072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.095 [2024-05-16 20:23:35.175110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:24:48.095 qpair failed and we were unable to recover it.
00:24:48.095 [2024-05-16 20:23:35.176101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.095 [2024-05-16 20:23:35.176133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:48.095 qpair failed and we were unable to recover it.
00:24:48.095 [2024-05-16 20:23:35.178899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.095 [2024-05-16 20:23:35.178938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420
00:24:48.095 qpair failed and we were unable to recover it.
00:24:48.096 [2024-05-16 20:23:35.181703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.096 [2024-05-16 20:23:35.181745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420
00:24:48.096 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=... with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously for tqpairs 0x7ff268000b90, 0x7ff270000b90, 0x7ff278000b90, and 0x1789f90 through 2024-05-16 20:23:35.204455 ...]
00:24:48.099 [2024-05-16 20:23:35.204455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.099 [2024-05-16 20:23:35.204480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420
00:24:48.099 qpair failed and we were unable to recover it.
00:24:48.099 [2024-05-16 20:23:35.204605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.204634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.204799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.204838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.204967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.204996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.205111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.205137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.205241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.205269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.205371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.205396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.205500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.205525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.205625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.205651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.205766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.205793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.205937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.205966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 
00:24:48.099 [2024-05-16 20:23:35.206085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.206112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.206202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.206227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.206340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.206367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.206478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.206504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.206588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.206613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.206725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.206750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.206858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.206901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.206998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.207025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.207137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.207163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.207268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.207297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 
00:24:48.099 [2024-05-16 20:23:35.207408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.207434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.207545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.207572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.207695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.207725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.207870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.207898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.207989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.208015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.208129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.208154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.208269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.208294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.208432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.208475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.208574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.208604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.208741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.208766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 
00:24:48.099 [2024-05-16 20:23:35.208890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.099 [2024-05-16 20:23:35.208916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.099 qpair failed and we were unable to recover it. 00:24:48.099 [2024-05-16 20:23:35.209027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.100 [2024-05-16 20:23:35.209052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.100 qpair failed and we were unable to recover it. 00:24:48.100 [2024-05-16 20:23:35.209140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.100 [2024-05-16 20:23:35.209165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.100 qpair failed and we were unable to recover it. 00:24:48.100 [2024-05-16 20:23:35.209275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.100 [2024-05-16 20:23:35.209300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.100 qpair failed and we were unable to recover it. 00:24:48.100 [2024-05-16 20:23:35.209421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.100 [2024-05-16 20:23:35.209450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.100 qpair failed and we were unable to recover it. 00:24:48.100 [2024-05-16 20:23:35.209584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.100 [2024-05-16 20:23:35.209610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.100 qpair failed and we were unable to recover it. 00:24:48.100 [2024-05-16 20:23:35.209723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.100 [2024-05-16 20:23:35.209749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.100 qpair failed and we were unable to recover it. 00:24:48.100 [2024-05-16 20:23:35.209833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.100 [2024-05-16 20:23:35.209865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.100 qpair failed and we were unable to recover it. 00:24:48.100 [2024-05-16 20:23:35.209943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.100 [2024-05-16 20:23:35.209969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.100 qpair failed and we were unable to recover it. 00:24:48.100 [2024-05-16 20:23:35.210051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.100 [2024-05-16 20:23:35.210075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.100 qpair failed and we were unable to recover it. 
00:24:48.100 [2024-05-16 20:23:35.210184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.100 [2024-05-16 20:23:35.210210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.100 qpair failed and we were unable to recover it. 00:24:48.100 [2024-05-16 20:23:35.210323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.100 [2024-05-16 20:23:35.210349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.100 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.210438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.210465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.210574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.210623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.210765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.210790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.210904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.210930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.211044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.211072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.211194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.211220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.211334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.211360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.211498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.211526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 
00:24:48.390 [2024-05-16 20:23:35.211658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.211685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.211795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.211820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.211903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.211930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.212032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.212058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.212170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.212196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.212333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.212363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.212498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.212523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.212616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.212643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.212725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.212751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.212840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.212875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 
00:24:48.390 [2024-05-16 20:23:35.212991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.213017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.213163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.213189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.213281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.213307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.213411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.213437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.213531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.213559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.213647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.213672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.213785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.213810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.213985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.214011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.214100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.214125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.214241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.214266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 
00:24:48.390 [2024-05-16 20:23:35.214384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.214411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.214518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.214544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.214681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.214706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.214829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.214861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.214943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.214968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.215050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.215077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.215197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.215222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.215302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.215329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.215434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.215460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.215547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.215575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 
00:24:48.390 [2024-05-16 20:23:35.215668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.215693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.215797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.390 [2024-05-16 20:23:35.215822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.390 qpair failed and we were unable to recover it. 00:24:48.390 [2024-05-16 20:23:35.215930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.215956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.216072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.216100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.216181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.216205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.216290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.216314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.216395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.216419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.216529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.216553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.216630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.216657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.216792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.216816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 
00:24:48.391 [2024-05-16 20:23:35.216940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.216965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.217079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.217104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.217216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.217240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.217353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.217378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.217495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.217524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.217623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.217663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.217747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.217770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.217891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.217916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.218000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.218024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.218104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.218128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 
00:24:48.391 [2024-05-16 20:23:35.218239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.218263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.218340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.218364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.218449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.218473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.218553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.218576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.218656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.218680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.218787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.218811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.218899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.218927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.219050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.219076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.219158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.219185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.219335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.219361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 
00:24:48.391 [2024-05-16 20:23:35.219495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.219526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.219643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.219684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.219779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.219823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.219947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.219973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.220085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.220110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.220213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.220238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.220354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.220379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.220492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.220520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.220616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.220642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.220727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.220752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 
00:24:48.391 [2024-05-16 20:23:35.220826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.220851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.220940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.220965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.221073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.221099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.221213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.391 [2024-05-16 20:23:35.221240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.391 qpair failed and we were unable to recover it. 00:24:48.391 [2024-05-16 20:23:35.221328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.392 [2024-05-16 20:23:35.221354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.392 qpair failed and we were unable to recover it. 00:24:48.392 [2024-05-16 20:23:35.221447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.392 [2024-05-16 20:23:35.221472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.392 qpair failed and we were unable to recover it. 00:24:48.392 [2024-05-16 20:23:35.221621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.392 [2024-05-16 20:23:35.221659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.392 qpair failed and we were unable to recover it. 00:24:48.392 [2024-05-16 20:23:35.221781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.392 [2024-05-16 20:23:35.221809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.392 qpair failed and we were unable to recover it. 00:24:48.392 [2024-05-16 20:23:35.221900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.392 [2024-05-16 20:23:35.221927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.392 qpair failed and we were unable to recover it. 00:24:48.392 [2024-05-16 20:23:35.222006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.392 [2024-05-16 20:23:35.222032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.392 qpair failed and we were unable to recover it. 
00:24:48.392 [2024-05-16 20:23:35.222124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.392 [2024-05-16 20:23:35.222149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:24:48.392 qpair failed and we were unable to recover it.
00:24:48.392 [2024-05-16 20:23:35.222883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.392 [2024-05-16 20:23:35.222912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420
00:24:48.392 qpair failed and we were unable to recover it.
00:24:48.392 [2024-05-16 20:23:35.223555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.392 [2024-05-16 20:23:35.223583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420
00:24:48.392 qpair failed and we were unable to recover it.
00:24:48.395 [2024-05-16 20:23:35.242450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.395 [2024-05-16 20:23:35.242489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:48.395 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error to addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously between 20:23:35.222 and 20:23:35.250 for tqpair values 0x7ff268000b90, 0x7ff278000b90, 0x7ff270000b90 and 0x1789f90; the repeated records are condensed here ...]
00:24:48.397 [2024-05-16 20:23:35.250316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.250344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.250468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.250495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.250588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.250616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.250706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.250734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.250825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.250861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.250983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.251015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.251185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.251227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.251322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.251352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.251503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.251531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.251629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.251656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 
00:24:48.397 [2024-05-16 20:23:35.251812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.251844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.251957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.251983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.252091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.252137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.252221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.252247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.252328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.252354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.252442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.252468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.252554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.252582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.252674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.252703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.252814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.252841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.252942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.252969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 
00:24:48.397 [2024-05-16 20:23:35.253060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.253085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.253166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.253190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.253275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.253300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.253382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.253410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.253524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.253550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.253657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.253690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.253784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.253810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.254016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.254062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.254196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.254243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.254373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.254422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 
00:24:48.397 [2024-05-16 20:23:35.254611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.254637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.254758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.254787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.254871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.254917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.397 qpair failed and we were unable to recover it. 00:24:48.397 [2024-05-16 20:23:35.255039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.397 [2024-05-16 20:23:35.255068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.255154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.255182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.255329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.255358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.255447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.255475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.255646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.255691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.255821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.255866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.255991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.256018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 
00:24:48.398 [2024-05-16 20:23:35.256100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.256125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.256296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.256343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.256453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.256501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.256646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.256692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.256829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.256866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.256949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.256974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.257055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.257080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.257212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.257239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.257328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.257355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.257503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.257530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 
00:24:48.398 [2024-05-16 20:23:35.257637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.257664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.257749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.257776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.257889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.257922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.258050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.258080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.258207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.258233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.258319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.258346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.258453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.258480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.258609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.258648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.258768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.258795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.258888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.258916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 
00:24:48.398 [2024-05-16 20:23:35.259030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.259055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.259250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.259275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.259377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.259402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.259493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.259521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.259719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.259745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.259835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.259870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.260007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.260037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.260154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.260184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.260291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.260317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.260402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.260430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 
00:24:48.398 [2024-05-16 20:23:35.260546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.398 [2024-05-16 20:23:35.260572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.398 qpair failed and we were unable to recover it. 00:24:48.398 [2024-05-16 20:23:35.260698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.260737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.260863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.260916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.261035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.261064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.261156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.261184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.261371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.261405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.261568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.261620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.261713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.261741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.261861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.261891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.261998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.262037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 
00:24:48.399 [2024-05-16 20:23:35.262132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.262159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.262242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.262268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.262382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.262410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.262527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.262558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.262646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.262674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.262780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.262805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.262907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.262933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.263014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.263040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.263233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.263262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.263351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.263380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 
00:24:48.399 [2024-05-16 20:23:35.263497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.263526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.263616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.263644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.263769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.263803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.263928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.263955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.264037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.264063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.264265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.264294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.264384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.264413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.264537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.264564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.264756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.264784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.264887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.264930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 
00:24:48.399 [2024-05-16 20:23:35.265021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.265046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.265149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.265177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.265271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.265300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.265432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.265461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.265606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.265634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.265758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.265801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.265926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.265952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.266046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.266075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.266158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.266184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.266280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.266308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 
00:24:48.399 [2024-05-16 20:23:35.266402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.266433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.266528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.266557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.266673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.266702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.399 [2024-05-16 20:23:35.266923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.399 [2024-05-16 20:23:35.266950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.399 qpair failed and we were unable to recover it. 00:24:48.400 [2024-05-16 20:23:35.267078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.400 [2024-05-16 20:23:35.267116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.400 qpair failed and we were unable to recover it. 00:24:48.400 [2024-05-16 20:23:35.267221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.400 [2024-05-16 20:23:35.267259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.400 qpair failed and we were unable to recover it. 00:24:48.400 [2024-05-16 20:23:35.267370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.400 [2024-05-16 20:23:35.267401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.400 qpair failed and we were unable to recover it. 00:24:48.400 [2024-05-16 20:23:35.267575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.400 [2024-05-16 20:23:35.267619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.400 qpair failed and we were unable to recover it. 00:24:48.400 [2024-05-16 20:23:35.267704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.400 [2024-05-16 20:23:35.267732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.400 qpair failed and we were unable to recover it. 00:24:48.400 [2024-05-16 20:23:35.267822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.400 [2024-05-16 20:23:35.267863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.400 qpair failed and we were unable to recover it. 
00:24:48.400 [2024-05-16 20:23:35.267982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.400 [2024-05-16 20:23:35.268009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.400 qpair failed and we were unable to recover it. 00:24:48.400 [2024-05-16 20:23:35.268088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.400 [2024-05-16 20:23:35.268114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.400 qpair failed and we were unable to recover it. 00:24:48.400 [2024-05-16 20:23:35.268231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.400 [2024-05-16 20:23:35.268257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.400 qpair failed and we were unable to recover it. 00:24:48.400 [2024-05-16 20:23:35.268419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.400 [2024-05-16 20:23:35.268448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.400 qpair failed and we were unable to recover it. 00:24:48.400 [2024-05-16 20:23:35.268543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.400 [2024-05-16 20:23:35.268576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.400 qpair failed and we were unable to recover it. 00:24:48.400 [2024-05-16 20:23:35.268693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.400 [2024-05-16 20:23:35.268721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.400 qpair failed and we were unable to recover it. 00:24:48.400 [2024-05-16 20:23:35.268847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.400 [2024-05-16 20:23:35.268904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.400 qpair failed and we were unable to recover it. 00:24:48.400 [2024-05-16 20:23:35.269043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.400 [2024-05-16 20:23:35.269068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.400 qpair failed and we were unable to recover it. 00:24:48.400 [2024-05-16 20:23:35.269164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.400 [2024-05-16 20:23:35.269192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.400 qpair failed and we were unable to recover it. 00:24:48.400 [2024-05-16 20:23:35.269285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.400 [2024-05-16 20:23:35.269313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.400 qpair failed and we were unable to recover it. 
00:24:48.400 [2024-05-16 20:23:35.269427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.400 [2024-05-16 20:23:35.269454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420
00:24:48.400 qpair failed and we were unable to recover it.
00:24:48.404 [2024-05-16 20:23:35.292175] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1797bb0 is same with the state(5) to be set
00:24:48.405 [... the three-line pattern above repeats continuously from 20:23:35.269427 through 20:23:35.299601 for tqpairs 0x1789f90, 0x7ff268000b90, 0x7ff270000b90, and 0x7ff278000b90; every connect() attempt to 10.0.0.2, port=4420 fails with errno = 111 and the qpair cannot be recovered ...]
00:24:48.405 [2024-05-16 20:23:35.299710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.299736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.299872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.299916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.300007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.300034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.300115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.300144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.300284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.300312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.300454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.300482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.300578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.300606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.300728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.300754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.300857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.300900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.301051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.301079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 
00:24:48.405 [2024-05-16 20:23:35.301172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.301200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.301296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.301324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.301440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.301468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.301575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.301622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.301725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.301752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.301835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.301870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.302009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.302038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.302189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.302242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.302348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.302397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.302511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.302542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 
00:24:48.405 [2024-05-16 20:23:35.302632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.302661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.302789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.302820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.302909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.302936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.303049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.303075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.405 [2024-05-16 20:23:35.303201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.405 [2024-05-16 20:23:35.303230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.405 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.303349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.303378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.303464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.303494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.303587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.303616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.303736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.303764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.303866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.303912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 
00:24:48.406 [2024-05-16 20:23:35.304025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.304051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.304136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.304162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.304295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.304325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.304447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.304490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.304630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.304673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.304782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.304813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.304975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.305002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.305092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.305117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.305263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.305288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.305396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.305424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 
00:24:48.406 [2024-05-16 20:23:35.305540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.305576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.305720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.305746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.305864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.305890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.305979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.306005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.306095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.306120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.306241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.306270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.306386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.306416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.306514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.306547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.306670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.306710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.306816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.306845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 
00:24:48.406 [2024-05-16 20:23:35.306978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.307006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.307104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.307133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.307277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.307324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.307442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.307469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.307588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.307615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.307728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.307753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.307832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.307866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.307951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.307978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.308063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.308090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.308196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.308223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 
00:24:48.406 [2024-05-16 20:23:35.308314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.308343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.406 [2024-05-16 20:23:35.308453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.406 [2024-05-16 20:23:35.308485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.406 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.308587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.308625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.308744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.308770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.308868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.308896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.308977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.309003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.309122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.309146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.309251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.309293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.309406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.309450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.309574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.309606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 
00:24:48.407 [2024-05-16 20:23:35.309759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.309788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.309923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.309952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.310067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.310096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.310180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.310207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.310330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.310358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.310486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.310515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.310660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.310703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.310814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.310844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.310956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.310985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.311125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.311154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 
00:24:48.407 [2024-05-16 20:23:35.311309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.311352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.311462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.311489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.311586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.311613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.311702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.311727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.311834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.311867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.311973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.311998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.312078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.312103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.312214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.312239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.312359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.312393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.312501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.312527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 
00:24:48.407 [2024-05-16 20:23:35.312674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.312702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.312834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.312866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.312947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.312972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.313079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.313105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.313194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.313222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.313308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.313347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.313489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.313536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.313626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.313667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.313776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.313801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.313906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.313932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 
00:24:48.407 [2024-05-16 20:23:35.314044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.314072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.314169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.314197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.314298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.314327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.314431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.314459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.407 [2024-05-16 20:23:35.314562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.407 [2024-05-16 20:23:35.314607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.407 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.314694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.314719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.314822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.314849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.314966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.314992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.315075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.315101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.315182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.315207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 
00:24:48.408 [2024-05-16 20:23:35.315286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.315312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.315420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.315447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.315523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.315549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.315629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.315655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.315782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.315821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.315942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.315970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.316080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.316106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.316187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.316213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.316306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.316330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.316415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.316440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 
00:24:48.408 [2024-05-16 20:23:35.316546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.316574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.316672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.316698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.316807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.316833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.316932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.316959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.317041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.317068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.317202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.317247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.317341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.317369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.317487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.317533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.317620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.317651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 00:24:48.408 [2024-05-16 20:23:35.317788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.408 [2024-05-16 20:23:35.317814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.408 qpair failed and we were unable to recover it. 
00:24:48.408 [2024-05-16 20:23:35.317909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.408 [2024-05-16 20:23:35.317937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:48.408 qpair failed and we were unable to recover it.
00:24:48.408 [... the same three-line failure repeats continuously from 20:23:35.317909 through 20:23:35.348029 (console time 00:24:48.408-00:24:48.413): connect() to addr=10.0.0.2, port=4420 fails with errno = 111 for tqpair handles 0x7ff270000b90, 0x7ff268000b90, 0x7ff278000b90 and 0x1789f90, and each time the qpair fails and cannot be recovered ...]
00:24:48.413 [2024-05-16 20:23:35.348169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.413 [2024-05-16 20:23:35.348195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.413 qpair failed and we were unable to recover it. 00:24:48.413 [2024-05-16 20:23:35.348310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.413 [2024-05-16 20:23:35.348336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.413 qpair failed and we were unable to recover it. 00:24:48.413 [2024-05-16 20:23:35.348452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.413 [2024-05-16 20:23:35.348479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.413 qpair failed and we were unable to recover it. 00:24:48.413 [2024-05-16 20:23:35.348597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.413 [2024-05-16 20:23:35.348628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.413 qpair failed and we were unable to recover it. 00:24:48.413 [2024-05-16 20:23:35.348735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.413 [2024-05-16 20:23:35.348761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.413 qpair failed and we were unable to recover it. 00:24:48.413 [2024-05-16 20:23:35.348870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.413 [2024-05-16 20:23:35.348897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.413 qpair failed and we were unable to recover it. 00:24:48.413 [2024-05-16 20:23:35.349012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.413 [2024-05-16 20:23:35.349037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.413 qpair failed and we were unable to recover it. 00:24:48.413 [2024-05-16 20:23:35.349118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.413 [2024-05-16 20:23:35.349144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.413 qpair failed and we were unable to recover it. 00:24:48.413 [2024-05-16 20:23:35.349246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.413 [2024-05-16 20:23:35.349273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.413 qpair failed and we were unable to recover it. 00:24:48.413 [2024-05-16 20:23:35.349408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.413 [2024-05-16 20:23:35.349434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.413 qpair failed and we were unable to recover it. 
00:24:48.413 [2024-05-16 20:23:35.349548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.413 [2024-05-16 20:23:35.349574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.413 qpair failed and we were unable to recover it. 00:24:48.413 [2024-05-16 20:23:35.349680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.413 [2024-05-16 20:23:35.349706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.413 qpair failed and we were unable to recover it. 00:24:48.413 [2024-05-16 20:23:35.349792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.413 [2024-05-16 20:23:35.349819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.413 qpair failed and we were unable to recover it. 00:24:48.413 [2024-05-16 20:23:35.349939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.413 [2024-05-16 20:23:35.349966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.413 qpair failed and we were unable to recover it. 00:24:48.413 [2024-05-16 20:23:35.350054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.413 [2024-05-16 20:23:35.350080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.413 qpair failed and we were unable to recover it. 00:24:48.413 [2024-05-16 20:23:35.350164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.413 [2024-05-16 20:23:35.350191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.413 qpair failed and we were unable to recover it. 00:24:48.413 [2024-05-16 20:23:35.350325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.350351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.350465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.350492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.350582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.350620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.350746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.350773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 
00:24:48.414 [2024-05-16 20:23:35.350884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.350929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.351058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.351086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.351177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.351205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.351326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.351354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.351462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.351507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.351659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.351685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.351764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.351789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.351922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.351966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.352089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.352117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.352217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.352252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 
00:24:48.414 [2024-05-16 20:23:35.352344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.352371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.352458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.352484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.352567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.352593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.352704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.352729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.352810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.352836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.352970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.353009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.353179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.353207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.353336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.353364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.353506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.353534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.353713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.353758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 
00:24:48.414 [2024-05-16 20:23:35.353873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.353900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.354053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.354097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.354221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.354264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.354369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.354402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.354575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.354604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.354708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.354733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.354817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.354844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.354968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.354994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.355123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.355167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.355252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.355280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 
00:24:48.414 [2024-05-16 20:23:35.355392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.355419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.355539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.355565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.414 [2024-05-16 20:23:35.355674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.414 [2024-05-16 20:23:35.355700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.414 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.355797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.355825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.355962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.356001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.356134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.356165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.356288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.356317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.356437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.356466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.356590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.356618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.356735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.356763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 
00:24:48.415 [2024-05-16 20:23:35.356924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.356954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.357048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.357077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.357180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.357208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.357304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.357332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.357455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.357482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.357579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.357609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.357769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.357797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.357887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.357913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.358047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.358089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.358217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.358261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 
00:24:48.415 [2024-05-16 20:23:35.358386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.358433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.358519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.358545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.358680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.358705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.358919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.358964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.359048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.359074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.359232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.359275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.359390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.359416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.359531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.359557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.359646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.359671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.359748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.359774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 
00:24:48.415 [2024-05-16 20:23:35.359878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.359906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.360020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.360045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.360162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.360189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.360304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.360330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.360450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.360475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.360560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.360585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.360691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.360717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.360825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.360851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.360941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.360966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.361043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.361068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 
00:24:48.415 [2024-05-16 20:23:35.361177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.361203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.361289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.361315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.361423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.361449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.361554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.361579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.361690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.361716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.415 qpair failed and we were unable to recover it. 00:24:48.415 [2024-05-16 20:23:35.361805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.415 [2024-05-16 20:23:35.361830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.361945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.361972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.362086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.362112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.362234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.362260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.363065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.363096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 
00:24:48.416 [2024-05-16 20:23:35.363217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.363244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.363380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.363406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.363519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.363545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.363672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.363698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.363786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.363812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.363903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.363931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.364013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.364039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.364122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.364147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.364239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.364267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.364366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.364392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 
00:24:48.416 [2024-05-16 20:23:35.364486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.364516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.364597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.364623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.364892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.364919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.365004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.365033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.365143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.365168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.365253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.365281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.365396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.365420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.365506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.365533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.365608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.365633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.365723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.365749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 
00:24:48.416 [2024-05-16 20:23:35.365888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.365917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.366005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.366032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.366125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.366153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.366273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.366299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.366401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.366427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.366521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.366560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.366659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.366686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.366776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.366801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.366890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.366920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.367007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.367032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 
00:24:48.416 [2024-05-16 20:23:35.367125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.367150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.367263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.367291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.367380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.367408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.367514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.367540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.367629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.367655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.367769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.367808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.367914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.367942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.368038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.416 [2024-05-16 20:23:35.368064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.416 qpair failed and we were unable to recover it. 00:24:48.416 [2024-05-16 20:23:35.368176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.368202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.368308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.368333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 
00:24:48.417 [2024-05-16 20:23:35.368417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.368442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.368532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.368575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.368667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.368695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.368800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.368825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.368924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.368951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.369056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.369082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.369206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.369234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.369339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.369364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.369480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.369505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.369603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.369631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 
00:24:48.417 [2024-05-16 20:23:35.369742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.369774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.369863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.369889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.369963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.369989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.370074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.370103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.370186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.370212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.370297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.370323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.370460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.370488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.370574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.370600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.370692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.370722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.370817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.370844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 
00:24:48.417 [2024-05-16 20:23:35.370933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.370959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.371070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.371095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.371178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.371221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.371318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.371347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.371483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.371512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.371608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.371636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.371764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.371792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.371912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.371940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.372022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.372048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.372143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.372169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 
00:24:48.417 [2024-05-16 20:23:35.372295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.372323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.372423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.372451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.372539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.372566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.372658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.372685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.372809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.372847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.372954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.372981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.373064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.373090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.373183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.373209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.373297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.373341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.373433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.373461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 
00:24:48.417 [2024-05-16 20:23:35.373580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.417 [2024-05-16 20:23:35.373609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.417 qpair failed and we were unable to recover it. 00:24:48.417 [2024-05-16 20:23:35.373761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.373786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.373880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.373907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.373996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.374021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.374096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.374135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.374235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.374264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.374378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.374406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.374523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.374551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.374647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.374675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.374807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.374833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 
00:24:48.418 [2024-05-16 20:23:35.374958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.374983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.375075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.375100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.375191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.375217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.375323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.375348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.375446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.375471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.375557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.375599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.375724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.375752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.375859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.375885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.376000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.376025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.376113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.376138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 
00:24:48.418 [2024-05-16 20:23:35.376258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.376283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.376369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.376411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.376532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.376561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.376659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.376687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.376771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.376804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.376920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.376946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.377052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.377077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.377168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.377210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.377359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.377387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.377502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.377530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 
00:24:48.418 [2024-05-16 20:23:35.377674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.377703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.377812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.377838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.377938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.377968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.378060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.378086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.378174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.378200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.378317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.378344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.378471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.378513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.378624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.378650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.418 qpair failed and we were unable to recover it. 00:24:48.418 [2024-05-16 20:23:35.378756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.418 [2024-05-16 20:23:35.378781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.378866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.378893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 
00:24:48.419 [2024-05-16 20:23:35.378983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.379008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.379091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.379115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.379227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.379252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.379351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.379378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.379472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.379500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.379588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.379615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.379754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.379779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.379907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.379932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.380047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.380072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.380156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.380198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 
00:24:48.419 [2024-05-16 20:23:35.380311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.380339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.380464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.380496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.380581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.380608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.380712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.380737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.380845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.380877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.380993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.381018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.381115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.381157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.381237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.381264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.381384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.381411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.381636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.381683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 
00:24:48.419 [2024-05-16 20:23:35.381767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.381796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.381911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.381941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.382023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.382049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.382153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.382182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.382316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.382343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.382468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.382494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.382621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.382659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.382773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.382799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.382889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.382917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.383009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.383034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 
00:24:48.419 [2024-05-16 20:23:35.383113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.383138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.383228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.383254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.383336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.383363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.383452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.383477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.383557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.383582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.383673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.383702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.383790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.383816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.383918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.383948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.384061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.384097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.384226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.384252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 
00:24:48.419 [2024-05-16 20:23:35.384331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.384357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.384436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.384462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.384549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.384575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.419 [2024-05-16 20:23:35.384769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.419 [2024-05-16 20:23:35.384795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.419 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.384911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.384938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.385028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.385054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.385130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.385155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.385267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.385293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.385415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.385443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.385531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.385559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 
00:24:48.420 [2024-05-16 20:23:35.385657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.385696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.385810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.385837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.385933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.385959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.386068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.386093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.386175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.386201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.386278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.386302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.386438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.386463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.386587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.386615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.386719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.386747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.386865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.386892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 
00:24:48.420 [2024-05-16 20:23:35.386986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.387012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.387098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.387124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.387236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.387262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.387343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.387368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.387480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.387505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.387599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.387626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.387722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.387748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.387888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.387915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.388002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.388027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.388141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.388168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 
00:24:48.420 [2024-05-16 20:23:35.388256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.388281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.388422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.388447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.388559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.388584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.388679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.388707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.388815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.388843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.388976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.389006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.389127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.389156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.389303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.389353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.389469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.389516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.389612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.389639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 
00:24:48.420 [2024-05-16 20:23:35.389722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.389747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.389839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.389873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.389958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.389986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.390067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.390093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.390180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.390206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.390316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.390343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.390474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.390501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.390610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.390636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.390748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.390776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 00:24:48.420 [2024-05-16 20:23:35.390869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.420 [2024-05-16 20:23:35.390896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.420 qpair failed and we were unable to recover it. 
00:24:48.420 [2024-05-16 20:23:35.390977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.391002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.391088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.391113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.391234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.391279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.391423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.391451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.391533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.391561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.391647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.391675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.391810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.391835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.391921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.391949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.392044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.392082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.392218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.392247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 
00:24:48.421 [2024-05-16 20:23:35.392399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.392427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.392545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.392593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.392719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.392747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.392836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.392887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.392974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.393000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.393083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.393131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.393249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.393277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.393440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.393465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.393582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.393607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.393734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.393762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 
00:24:48.421 [2024-05-16 20:23:35.393888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.393914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.393997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.394024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.394139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.394163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.394281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.394309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.394400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.394428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.394598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.394622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.394736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.394764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.394887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.394914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.395051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.395077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.395171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.395197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 
00:24:48.421 [2024-05-16 20:23:35.395279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.395305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.395397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.395423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.395561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.395588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.395675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.395700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.395794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.395820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.395903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.395929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.396020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.396046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.396127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.396153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.396231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.396256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.396391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.396416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 
00:24:48.421 [2024-05-16 20:23:35.396531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.396557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.396668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.396696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.396820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.396846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.396961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.396986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.397075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.397102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.397257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.421 [2024-05-16 20:23:35.397285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.421 qpair failed and we were unable to recover it. 00:24:48.421 [2024-05-16 20:23:35.397372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.397399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.397523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.397548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.397627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.397653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.397745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.397770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 
00:24:48.422 [2024-05-16 20:23:35.397864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.397907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.398028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.398069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.398145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.398170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.398284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.398309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.398461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.398489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.398598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.398623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.398784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.398812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.398908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.398936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.399061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.399107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.399239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.399283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 
00:24:48.422 [2024-05-16 20:23:35.399422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.399450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.399557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.399583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.399717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.399743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.399864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.399891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.399979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.400004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.400135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.400161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.400269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.400294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.400383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.400408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.400545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.400571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.400710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.400736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 
00:24:48.422 [2024-05-16 20:23:35.400823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.400848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.400950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.400976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.401105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.401151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.401260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.401286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.401435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.401461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.401598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.401624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.401708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.401734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.401848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.401880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.402003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.402029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.402108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.402134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 
00:24:48.422 [2024-05-16 20:23:35.402263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.402289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.402391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.402416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.402538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.402567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.402647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.402673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.402773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.402798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.402914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.402941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.403030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.403055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.422 [2024-05-16 20:23:35.403143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.422 [2024-05-16 20:23:35.403169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.422 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.403293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.403319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.403395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.403421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 
00:24:48.423 [2024-05-16 20:23:35.403505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.403532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.403635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.403660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.403776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.403801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.403889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.403916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.404006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.404032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.404108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.404134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.404226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.404253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.404359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.404385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.404496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.404522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.404628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.404653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 
00:24:48.423 [2024-05-16 20:23:35.404741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.404767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.404878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.404912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.405025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.405051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.405169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.405194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.405307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.405333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.405410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.405436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.405518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.405544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.405653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.405680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.405870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.405896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.405980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.406006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 
00:24:48.423 [2024-05-16 20:23:35.406142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.406167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.406249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.406275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.406355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.406381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.406497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.406523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.406658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.406683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.406767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.406793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.406881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.406917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.407022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.407048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.407136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.407161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.407276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.407302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 
00:24:48.423 [2024-05-16 20:23:35.407391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.407418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.407500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.407526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.407638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.407667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.407801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.407827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.407909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.407935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.408071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.408096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.408188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.408213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.408319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.408345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.408455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.408481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.408556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.408582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 
00:24:48.423 [2024-05-16 20:23:35.408662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.408688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.408802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.408828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.408949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.408975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.409059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.409086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.409203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.409229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.409344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.423 [2024-05-16 20:23:35.409370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.423 qpair failed and we were unable to recover it. 00:24:48.423 [2024-05-16 20:23:35.409460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.409487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.409599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.409625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.409700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.409724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.409832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.409865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 
00:24:48.424 [2024-05-16 20:23:35.409955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.409980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.410098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.410124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.410237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.410263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.410343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.410369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.410472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.410497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.410612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.410638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.410760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.410786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.410900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.410927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.411035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.411062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.411213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.411252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 
00:24:48.424 [2024-05-16 20:23:35.411350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.411377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.411519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.411545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.411625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.411650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.411782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.411807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.411967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.411994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.412126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.412151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.412279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.412307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.412434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.412462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.412608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.412635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 00:24:48.424 [2024-05-16 20:23:35.412760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.424 [2024-05-16 20:23:35.412788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.424 qpair failed and we were unable to recover it. 
00:24:48.424 [2024-05-16 20:23:35.412923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.424 [2024-05-16 20:23:35.412950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420
00:24:48.424 qpair failed and we were unable to recover it.
00:24:48.424 [2024-05-16 20:23:35.413508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.424 [2024-05-16 20:23:35.413535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:48.424 qpair failed and we were unable to recover it.
[... the same three-record pattern -- posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 or tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats continuously for the remaining connection attempts between 20:23:35.413 and 20:23:35.442 ...]
00:24:48.428 [2024-05-16 20:23:35.442046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.428 [2024-05-16 20:23:35.442071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420
00:24:48.428 qpair failed and we were unable to recover it.
00:24:48.428 [2024-05-16 20:23:35.442169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.428 [2024-05-16 20:23:35.442197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.428 qpair failed and we were unable to recover it. 00:24:48.428 [2024-05-16 20:23:35.442340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.428 [2024-05-16 20:23:35.442368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.428 qpair failed and we were unable to recover it. 00:24:48.428 [2024-05-16 20:23:35.442475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.428 [2024-05-16 20:23:35.442503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.428 qpair failed and we were unable to recover it. 00:24:48.428 [2024-05-16 20:23:35.442584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.428 [2024-05-16 20:23:35.442613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.428 qpair failed and we were unable to recover it. 00:24:48.428 [2024-05-16 20:23:35.442755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.428 [2024-05-16 20:23:35.442784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.428 qpair failed and we were unable to recover it. 00:24:48.428 [2024-05-16 20:23:35.442959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.428 [2024-05-16 20:23:35.442984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.428 qpair failed and we were unable to recover it. 00:24:48.428 [2024-05-16 20:23:35.443140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.428 [2024-05-16 20:23:35.443168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.428 qpair failed and we were unable to recover it. 00:24:48.428 [2024-05-16 20:23:35.443284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.428 [2024-05-16 20:23:35.443312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.428 qpair failed and we were unable to recover it. 00:24:48.428 [2024-05-16 20:23:35.443432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.428 [2024-05-16 20:23:35.443460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.428 qpair failed and we were unable to recover it. 00:24:48.428 [2024-05-16 20:23:35.443566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.428 [2024-05-16 20:23:35.443610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.428 qpair failed and we were unable to recover it. 
00:24:48.428 [2024-05-16 20:23:35.443696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.428 [2024-05-16 20:23:35.443724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.428 qpair failed and we were unable to recover it. 00:24:48.428 [2024-05-16 20:23:35.443849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.428 [2024-05-16 20:23:35.443899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.428 qpair failed and we were unable to recover it. 00:24:48.428 [2024-05-16 20:23:35.444009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.428 [2024-05-16 20:23:35.444052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.428 qpair failed and we were unable to recover it. 00:24:48.428 [2024-05-16 20:23:35.444136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.428 [2024-05-16 20:23:35.444163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.428 qpair failed and we were unable to recover it. 00:24:48.428 [2024-05-16 20:23:35.444311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.428 [2024-05-16 20:23:35.444340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.428 qpair failed and we were unable to recover it. 00:24:48.428 [2024-05-16 20:23:35.444453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.428 [2024-05-16 20:23:35.444482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.444605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.444632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.444727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.444755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.444895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.444920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.445034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.445059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 
00:24:48.429 [2024-05-16 20:23:35.445213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.445240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.445364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.445391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.445520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.445547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.445629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.445656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.445829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.445878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.445972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.446000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.446133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.446159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.446286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.446329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.446455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.446498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.446618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.446643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 
00:24:48.429 [2024-05-16 20:23:35.446729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.446755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.446868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.446895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.446982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.447007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.447129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.447155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.447251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.447278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.447420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.447446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.447579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.447610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.447775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.447801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.447923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.447950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.448062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.448092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 
00:24:48.429 [2024-05-16 20:23:35.448248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.448297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.448429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.448471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.448607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.448633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.448715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.448741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.448868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.448895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.449007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.449035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.449146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.449171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.449309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.449336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.449444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.449470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.449585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.449611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 
00:24:48.429 [2024-05-16 20:23:35.449726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.449753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.449860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.449887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.449977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.450004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.450084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.429 [2024-05-16 20:23:35.450111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.429 qpair failed and we were unable to recover it. 00:24:48.429 [2024-05-16 20:23:35.450185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.450211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.450319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.450345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.450457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.450484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.450571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.450597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.450730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.450756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.450864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.450890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 
00:24:48.430 [2024-05-16 20:23:35.450984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.451011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.451100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.451126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.451212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.451242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.451324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.451349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.451461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.451488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.451580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.451605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.451694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.451720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.451822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.451867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.451990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.452028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.452136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.452175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 
00:24:48.430 [2024-05-16 20:23:35.452292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.452319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.452397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.452423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.452537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.452563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.452676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.452701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.452829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.452886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.453016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.453046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.453228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.453273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.453373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.453402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.453531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.453575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.453710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.453736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 
00:24:48.430 [2024-05-16 20:23:35.453829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.453875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.454042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.454077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.454209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.454250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.454341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.454371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.454457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.454486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.454612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.454640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.430 [2024-05-16 20:23:35.454762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.430 [2024-05-16 20:23:35.454789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.430 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.454871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.454901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.455036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.455078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.455293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.455324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 
00:24:48.431 [2024-05-16 20:23:35.455445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.455473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.455593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.455620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.455743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.455768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.455851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.455882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.455987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.456012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.456130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.456157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.456290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.456318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.456412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.456439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.456594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.456626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.456765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.456791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 
00:24:48.431 [2024-05-16 20:23:35.456887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.456913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.457025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.457051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.457188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.457237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.457323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.457349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.457437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.457463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.457608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.457634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.457719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.457746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.457864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.457891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.457973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.457998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.458077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.458102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 
00:24:48.431 [2024-05-16 20:23:35.458250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.458274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.458362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.458388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.458511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.458550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.458642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.458669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.458784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.458811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.458921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.458951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.459081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.459125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.459214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.459241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.459351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.459377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.459458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.459485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 
00:24:48.431 [2024-05-16 20:23:35.459572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.459598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.459709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.459748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.459869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.459896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.460048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.460081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.460245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.460271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.460365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.431 [2024-05-16 20:23:35.460390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.431 qpair failed and we were unable to recover it. 00:24:48.431 [2024-05-16 20:23:35.460502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.460527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.460623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.460650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.460740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.460766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.460887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.460940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 
00:24:48.432 [2024-05-16 20:23:35.461064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.461093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.461182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.461210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.461306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.461334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.461432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.461460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.461555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.461582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.461714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.461742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.461831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.461865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.461980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.462005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.462083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.462109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.462244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.462270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 
00:24:48.432 [2024-05-16 20:23:35.462380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.462406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.462489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.462514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.462631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.462657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.462781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.462820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.462935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.462962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.463065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.463096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.463187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.463216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.463329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.463355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.463443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.463468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.463636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.463665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 
00:24:48.432 [2024-05-16 20:23:35.463794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.463822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.463984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.464011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.464112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.464141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.464260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.464288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.464442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.464471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.464575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.464602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.464725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.464755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.464845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.464880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.464986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.465015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.465170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.465197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 
00:24:48.432 [2024-05-16 20:23:35.465282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.465309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.465404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.465432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.465529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.465556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.465675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.465703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.465801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.465829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.432 [2024-05-16 20:23:35.465970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.432 [2024-05-16 20:23:35.466014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.432 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.466142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.466184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.466337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.466367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.466470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.466498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.466590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.466618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 
00:24:48.433 [2024-05-16 20:23:35.466750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.466779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.466928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.466955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.467066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.467093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.467183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.467210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.467356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.467384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.467561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.467607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.467725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.467751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.467839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.467871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.467983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.468009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.468151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.468179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 
00:24:48.433 [2024-05-16 20:23:35.468301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.468330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.468477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.468505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.468603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.468635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.468741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.468770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.468898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.468926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.469051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.469080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.469233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.469261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.469381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.469409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.469497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.469525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.469674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.469703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 
00:24:48.433 [2024-05-16 20:23:35.469803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.469828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.469916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.469942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.470082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.470107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.470227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.470252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.470418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.470446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.470548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.470577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.470664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.470708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.470830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.470876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.470998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.471031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.471176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.471206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 
00:24:48.433 [2024-05-16 20:23:35.471332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.471361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.471486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.471515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.471677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.471709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.471838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.471875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.472034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.433 [2024-05-16 20:23:35.472060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.433 qpair failed and we were unable to recover it. 00:24:48.433 [2024-05-16 20:23:35.472166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.472194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.473868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.473913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.474059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.474085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.474171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.474213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.474340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.474368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 
00:24:48.434 [2024-05-16 20:23:35.474478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.474507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.474657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.474687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.474816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.474846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.474950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.474976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.475127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.475154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.475287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.475318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.475468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.475498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.475662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.475692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.475845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.475881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.476017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.476043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 
00:24:48.434 [2024-05-16 20:23:35.476138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.476182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.476302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.476332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.478867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.478914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.479045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.479073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.479190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.479216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.479310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.479336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.479431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.479458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.479566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.479592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.479728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.479758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 00:24:48.434 [2024-05-16 20:23:35.479873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.434 [2024-05-16 20:23:35.479901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.434 qpair failed and we were unable to recover it. 
00:24:48.434 [2024-05-16 20:23:35.480021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.480048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.480135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.480161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.480275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.480302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.480442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.480486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.480585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.480614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.480731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.480758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.480888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.480919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.481044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.481071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.481185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.481212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.481383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.481413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 
00:24:48.435 [2024-05-16 20:23:35.481514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.481543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.481677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.481704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.481819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.481845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.481970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.482001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.482128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.482155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.482241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.482267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.482397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.482429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.482546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.482574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.482722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.482749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.482849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.482888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 
00:24:48.435 [2024-05-16 20:23:35.484868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.484902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.485119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.485147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.485301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.485332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.485475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.485502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.485645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.485672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.485777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.485806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.485977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.486006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.486108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.486135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.486313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.486341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.486497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.486524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 
00:24:48.435 [2024-05-16 20:23:35.486622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.486649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.486795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.486822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.487053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.487081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.487219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.487263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.487382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.487408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.487527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.487554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.487668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.487694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.487843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.487885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.488027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.488053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 00:24:48.435 [2024-05-16 20:23:35.488173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.435 [2024-05-16 20:23:35.488199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.435 qpair failed and we were unable to recover it. 
00:24:48.436 [2024-05-16 20:23:35.489871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.489907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.490078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.490107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.490233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.490259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.490351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.490380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.490506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.490539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.490647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.490673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.490796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.490827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.490963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.490991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.491108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.491134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.491279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.491309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 
00:24:48.436 [2024-05-16 20:23:35.491425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.491452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.491572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.491599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.491685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.491712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.491806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.491833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.491931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.491957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.492058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.492089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.492243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.492271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.492351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.492378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.494868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.494904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.495050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.495078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 
00:24:48.436 [2024-05-16 20:23:35.495198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.495225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.495328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.495358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.495497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.495525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.495664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.495691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.495881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.495911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.496052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.496080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.496195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.496222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.496389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.496420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.496565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.496593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.496707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.496735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 
00:24:48.436 [2024-05-16 20:23:35.496849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.496884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.496966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.496993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.497111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.497138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.497247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.497286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.497380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.497406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.497520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.497547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.497707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.497736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.497845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.497878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.497973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.436 [2024-05-16 20:23:35.497999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.436 qpair failed and we were unable to recover it. 00:24:48.436 [2024-05-16 20:23:35.498104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.498131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 
00:24:48.437 [2024-05-16 20:23:35.498262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.498288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.498370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.498395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.498520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.498549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.498652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.498677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.498785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.498810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.498908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.498933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.499025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.499050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.499164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.499189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.499277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.499303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.499417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.499441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 
00:24:48.437 [2024-05-16 20:23:35.499548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.499574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.499713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.499740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.499886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.499912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.500001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.500028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.500163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.500191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.500312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.500338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.500430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.500455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.500590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.500618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.500755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.500780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.500904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.500930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 
00:24:48.437 [2024-05-16 20:23:35.501096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.501129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.501234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.501261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.501345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.501370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.501522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.501550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.501712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.501739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.501843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.501878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.501982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.502009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.502115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.502140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.502254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.502280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.502414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.502442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 
00:24:48.437 [2024-05-16 20:23:35.502577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.502606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.502719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.502744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.502829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.502860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.502940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.502965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.503083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.503110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.503197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.503223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.503334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.503359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.503495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.503542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.503666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.437 [2024-05-16 20:23:35.503699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.437 qpair failed and we were unable to recover it. 00:24:48.437 [2024-05-16 20:23:35.503815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.438 [2024-05-16 20:23:35.503841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.438 qpair failed and we were unable to recover it. 
00:24:48.438 [2024-05-16 20:23:35.503965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.438 [2024-05-16 20:23:35.503991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.438 qpair failed and we were unable to recover it. 00:24:48.438 [2024-05-16 20:23:35.504082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.438 [2024-05-16 20:23:35.504108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.438 qpair failed and we were unable to recover it. 00:24:48.438 [2024-05-16 20:23:35.504224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.438 [2024-05-16 20:23:35.504249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.438 qpair failed and we were unable to recover it. 00:24:48.438 [2024-05-16 20:23:35.504365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.438 [2024-05-16 20:23:35.504392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.438 qpair failed and we were unable to recover it. 00:24:48.438 [2024-05-16 20:23:35.504540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.438 [2024-05-16 20:23:35.504566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.438 qpair failed and we were unable to recover it. 00:24:48.438 [2024-05-16 20:23:35.504652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.438 [2024-05-16 20:23:35.504677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.438 qpair failed and we were unable to recover it. 00:24:48.438 [2024-05-16 20:23:35.504786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.438 [2024-05-16 20:23:35.504811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.438 qpair failed and we were unable to recover it. 00:24:48.438 [2024-05-16 20:23:35.504903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.438 [2024-05-16 20:23:35.504934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.438 qpair failed and we were unable to recover it. 00:24:48.438 [2024-05-16 20:23:35.505043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.438 [2024-05-16 20:23:35.505069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.438 qpair failed and we were unable to recover it. 00:24:48.438 [2024-05-16 20:23:35.505183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.438 [2024-05-16 20:23:35.505214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.438 qpair failed and we were unable to recover it. 
00:24:48.438 [2024-05-16 20:23:35.505356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.438 [2024-05-16 20:23:35.505391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.438 qpair failed and we were unable to recover it. 00:24:48.438 [2024-05-16 20:23:35.505527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.438 [2024-05-16 20:23:35.505553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.438 qpair failed and we were unable to recover it. 00:24:48.438 [2024-05-16 20:23:35.505643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.438 [2024-05-16 20:23:35.505669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.438 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.505759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.505785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.505877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.505903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.506013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.506039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.506167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.506196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.506327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.506352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.506461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.506486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.506622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.506650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 
00:24:48.730 [2024-05-16 20:23:35.506780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.506806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.506902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.506928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.507014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.507040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.507123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.507149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.507230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.507255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.507352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.507380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.507510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.507535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.507626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.507651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.507775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.507801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.507884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.507914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 
00:24:48.730 [2024-05-16 20:23:35.508002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.508027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.508106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.508131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.508265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.508291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.508381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.508406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.508542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.508578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.508686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.508712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.508846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.508895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.509007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.509036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.509147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.509173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.509256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.509282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 
00:24:48.730 [2024-05-16 20:23:35.509370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.509396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.509480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.509505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.509610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.509635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.509738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.509782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.509871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.509899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.509983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.510008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.510091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.510116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.730 qpair failed and we were unable to recover it. 00:24:48.730 [2024-05-16 20:23:35.510192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.730 [2024-05-16 20:23:35.510217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.510308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.510333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.510445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.510489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 
00:24:48.731 [2024-05-16 20:23:35.510597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.510623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.510733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.510758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.510866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.510897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.511002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.511027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.511143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.511170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.511279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.511308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.511465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.511491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.511579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.511605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.511743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.511772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.511901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.511927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 
00:24:48.731 [2024-05-16 20:23:35.512014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.512040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.512130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.512156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.512293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.512319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.512455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.512481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.512600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.512626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.512733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.512758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.512878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.512905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.513031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.513059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.513169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.513195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.513278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.513303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 
00:24:48.731 [2024-05-16 20:23:35.513383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.513409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.513518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.513544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.513631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.513656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.513774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.513799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.513879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.513905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.514035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.514074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.514226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.514265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.514410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.514440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.514537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.514566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.514671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.514702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 
00:24:48.731 [2024-05-16 20:23:35.514844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.514881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.515011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.515068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.515254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.515286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.515413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.515446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.515572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.731 [2024-05-16 20:23:35.515616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.731 qpair failed and we were unable to recover it. 00:24:48.731 [2024-05-16 20:23:35.515744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.515772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.515904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.515930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.516019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.516044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.516182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.516208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.516302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.516329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 
00:24:48.732 [2024-05-16 20:23:35.516472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.516498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.516580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.516605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.516691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.516716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.516849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.516951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.517064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.517090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.517197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.517223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.517314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.517339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.517441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.517466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.517575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.517600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.517683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.517708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 
00:24:48.732 [2024-05-16 20:23:35.517834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.517880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.517977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.518004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.518100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.518126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.518214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.518240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.518322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.518348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.518464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.518489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.518580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.518607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.518715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.518740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.518850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.518881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.519019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.519044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 
00:24:48.732 [2024-05-16 20:23:35.519138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.519163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.519248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.519273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.519352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.519379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.519491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.519516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.519606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.519631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.519721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.519747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.519863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.519889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.520032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.520058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.520168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.520194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 00:24:48.732 [2024-05-16 20:23:35.520282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.520309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it. 
00:24:48.732 [2024-05-16 20:23:35.520412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.520438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it.
00:24:48.732 [2024-05-16 20:23:35.520553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.732 [2024-05-16 20:23:35.520579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.732 qpair failed and we were unable to recover it.
00:24:48.734 [2024-05-16 20:23:35.529099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.734 [2024-05-16 20:23:35.529138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.734 qpair failed and we were unable to recover it.
00:24:48.738 [... the same pair of errors, posix.c:1037:posix_sock_create: connect() failed, errno = 111 followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error, repeats continuously from 20:23:35.520 through 20:23:35.547 for tqpair=0x1789f90, 0x7ff278000b90 and 0x7ff270000b90, always targeting addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:24:48.738 [2024-05-16 20:23:35.547455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.738 [2024-05-16 20:23:35.547488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.738 qpair failed and we were unable to recover it. 00:24:48.738 [2024-05-16 20:23:35.547578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.738 [2024-05-16 20:23:35.547604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.738 qpair failed and we were unable to recover it. 00:24:48.738 [2024-05-16 20:23:35.547713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.738 [2024-05-16 20:23:35.547739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.738 qpair failed and we were unable to recover it. 00:24:48.738 [2024-05-16 20:23:35.547844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.738 [2024-05-16 20:23:35.547877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.738 qpair failed and we were unable to recover it. 00:24:48.738 [2024-05-16 20:23:35.547994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.738 [2024-05-16 20:23:35.548020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.738 qpair failed and we were unable to recover it. 00:24:48.738 [2024-05-16 20:23:35.548099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.738 [2024-05-16 20:23:35.548123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.738 qpair failed and we were unable to recover it. 00:24:48.738 [2024-05-16 20:23:35.548210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.738 [2024-05-16 20:23:35.548236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.738 qpair failed and we were unable to recover it. 00:24:48.738 [2024-05-16 20:23:35.548321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.738 [2024-05-16 20:23:35.548347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.738 qpair failed and we were unable to recover it. 00:24:48.738 [2024-05-16 20:23:35.548464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.738 [2024-05-16 20:23:35.548491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.738 qpair failed and we were unable to recover it. 00:24:48.738 [2024-05-16 20:23:35.548626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.738 [2024-05-16 20:23:35.548651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.738 qpair failed and we were unable to recover it. 
00:24:48.738 [2024-05-16 20:23:35.548753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.738 [2024-05-16 20:23:35.548792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.738 qpair failed and we were unable to recover it. 00:24:48.738 [2024-05-16 20:23:35.548910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.738 [2024-05-16 20:23:35.548938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.738 qpair failed and we were unable to recover it. 00:24:48.738 [2024-05-16 20:23:35.549077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.738 [2024-05-16 20:23:35.549102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.738 qpair failed and we were unable to recover it. 00:24:48.738 [2024-05-16 20:23:35.549213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.738 [2024-05-16 20:23:35.549238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.738 qpair failed and we were unable to recover it. 00:24:48.738 [2024-05-16 20:23:35.549391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.738 [2024-05-16 20:23:35.549417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.738 qpair failed and we were unable to recover it. 00:24:48.738 [2024-05-16 20:23:35.549506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.738 [2024-05-16 20:23:35.549531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.738 qpair failed and we were unable to recover it. 00:24:48.738 [2024-05-16 20:23:35.549613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.738 [2024-05-16 20:23:35.549639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.738 qpair failed and we were unable to recover it. 00:24:48.738 [2024-05-16 20:23:35.549753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.738 [2024-05-16 20:23:35.549778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.738 qpair failed and we were unable to recover it. 00:24:48.738 [2024-05-16 20:23:35.549889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.738 [2024-05-16 20:23:35.549914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.738 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.549993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.550019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 
00:24:48.739 [2024-05-16 20:23:35.550107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.550132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.550214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.550240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.550324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.550348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.550456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.550483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.550595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.550621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.550729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.550755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.550836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.550871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.550970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.550997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.551113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.551139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.551217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.551242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 
00:24:48.739 [2024-05-16 20:23:35.551316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.551341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.551451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.551478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.551561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.551588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.551678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.551705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.551796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.551834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.551994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.552021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.552101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.552127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.552210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.552234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.552312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.552337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.552452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.552477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 
00:24:48.739 [2024-05-16 20:23:35.552589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.552620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.552735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.552761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.552840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.552874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.552957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.552984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.553072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.553098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.553182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.553209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.553291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.553318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.553398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.553423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.553543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.553569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.553677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.553702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 
00:24:48.739 [2024-05-16 20:23:35.553788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.553813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.739 [2024-05-16 20:23:35.553946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.739 [2024-05-16 20:23:35.553973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.739 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.554054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.554080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.554170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.554196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.554339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.554365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.554498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.554523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.554634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.554660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.554776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.554801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.554891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.554918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.555028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.555054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 
00:24:48.740 [2024-05-16 20:23:35.555136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.555161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.555246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.555272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.555359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.555384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.555470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.555496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.555631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.555656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.555754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.555793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.555891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.555919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.556043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.556082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.556174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.556202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.556311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.556337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 
00:24:48.740 [2024-05-16 20:23:35.556475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.556501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.556589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.556615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.556733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.556759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.556865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.556904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.557048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.557075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.557163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.557188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.557298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.557323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.557407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.557435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.557524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.557550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.557637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.557662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 
00:24:48.740 [2024-05-16 20:23:35.557767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.557792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.557912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.557938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.558081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.558107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.558219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.558245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.558360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.558385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.558467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.558492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.558593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.558618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.558721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.558746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.558863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.558889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.559008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.559032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 
00:24:48.740 [2024-05-16 20:23:35.559116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.740 [2024-05-16 20:23:35.559141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.740 qpair failed and we were unable to recover it. 00:24:48.740 [2024-05-16 20:23:35.559220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.559245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.559352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.559378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.559472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.559498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.559617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.559646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.559733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.559759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.559846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.559878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.559958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.559983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.560094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.560119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.560232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.560259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 
00:24:48.741 [2024-05-16 20:23:35.560335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.560362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.560469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.560495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.560581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.560607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.560701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.560729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.560843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.560874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.560994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.561019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.561133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.561159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.561243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.561274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.561394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.561419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.561542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.561567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 
00:24:48.741 [2024-05-16 20:23:35.561650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.561676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.561790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.561816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.561937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.561962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.562075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.562100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.562210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.562236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.562349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.562374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.562464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.562489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.562592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.562617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.562724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.562763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.562868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.562907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 
00:24:48.741 [2024-05-16 20:23:35.563005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.563032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.563128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.563154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.563267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.563292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.563373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.563398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.563479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.563504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.563610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.563635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.563774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.563799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.563913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.563939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.564027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.564057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.564145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.564171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 
00:24:48.741 [2024-05-16 20:23:35.564282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.741 [2024-05-16 20:23:35.564307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.741 qpair failed and we were unable to recover it. 00:24:48.741 [2024-05-16 20:23:35.564389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.564414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.564528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.564553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.564658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.564682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.564767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.564796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.564882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.564907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.564994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.565019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.565096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.565120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.565198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.565223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.565311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.565336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 
00:24:48.742 [2024-05-16 20:23:35.565438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.565477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.565571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.565598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.565691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.565718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.565805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.565832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.565979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.566022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.566144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.566172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.566282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.566307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.566416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.566442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.566555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.566581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.566688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.566713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 
00:24:48.742 [2024-05-16 20:23:35.566822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.566848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.566943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.566968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.567079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.567104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.567187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.567214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.567307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.567333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.567409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.567435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.567538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.567565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.567674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.567699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.567809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.567847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.568035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.568071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 
00:24:48.742 [2024-05-16 20:23:35.568156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.568182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.568272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.568299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.568408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.568434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.568525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.568551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.568639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.568664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.568797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.568823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.568925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.568951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.569033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.569059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.569177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.569202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.742 [2024-05-16 20:23:35.569313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.569339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 
00:24:48.742 [2024-05-16 20:23:35.569434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.742 [2024-05-16 20:23:35.569460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.742 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.569559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.569597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.569745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.569772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.569885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.569912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.569998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.570029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.570169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.570195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.570302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.570328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.570444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.570471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.570574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.570601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.570687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.570714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 
00:24:48.743 [2024-05-16 20:23:35.570821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.570848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.570995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.571021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.571131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.571156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.571245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.571273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.571386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.571412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.571519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.571544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.571633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.571658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.571766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.571791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.571888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.571915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.572027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.572053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 
00:24:48.743 [2024-05-16 20:23:35.572197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.572222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.572299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.572324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.572438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.572463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.572542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.572568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.572695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.572733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.572862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.572890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.573009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.573035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.573126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.573153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.573241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.573267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.573377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.573402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 
00:24:48.743 [2024-05-16 20:23:35.573524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.573550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.573669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.573698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.573811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.573838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.573932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.743 [2024-05-16 20:23:35.573959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.743 qpair failed and we were unable to recover it. 00:24:48.743 [2024-05-16 20:23:35.574075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.574101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.574194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.574221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.574317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.574342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.574413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.574438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.574543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.574568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.574675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.574702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 
00:24:48.744 [2024-05-16 20:23:35.574792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.574820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.574919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.574945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.575033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.575059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.575157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.575183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.575293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.575322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.575408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.575435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.575547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.575573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.575708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.575734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.575849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.575881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.575965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.575991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 
00:24:48.744 [2024-05-16 20:23:35.576086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.576118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.576228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.576254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.576340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.576365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.576474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.576499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.576588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.576614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.576724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.576750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.576876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.576903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.576992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.577017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.577160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.577186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.577295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.577321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 
00:24:48.744 [2024-05-16 20:23:35.577451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.577477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.577585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.577611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.577718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.577743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.577825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.577851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.577965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.577990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.578077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.578102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.578211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.578237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.578349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.578374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.578483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.578509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.578650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.578675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 
00:24:48.744 [2024-05-16 20:23:35.578799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.578838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.578988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.579030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.744 qpair failed and we were unable to recover it. 00:24:48.744 [2024-05-16 20:23:35.579174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.744 [2024-05-16 20:23:35.579213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.579306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.579333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.579410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.579436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.579531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.579558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.579641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.579667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.579777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.579804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.579921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.579950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.580031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.580057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 
00:24:48.745 [2024-05-16 20:23:35.580174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.580212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.580300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.580327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.580406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.580432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.580569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.580594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.580709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.580734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.580858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.580886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.581035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.581062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.581178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.581203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.581285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.581312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.581420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.581448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 
00:24:48.745 [2024-05-16 20:23:35.581562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.581588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.581677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.581703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.581810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.581835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.581960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.581987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.582063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.582089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.582196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.582221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.582299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.582325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.582459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.582485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.582585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.582611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.582691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.582717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 
00:24:48.745 [2024-05-16 20:23:35.582801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.582829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.582928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.582954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.583043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.583069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.583180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.583205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.583318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.583343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.583432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.583458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.583573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.583600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.583714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.583741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.583824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.583858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.745 [2024-05-16 20:23:35.583941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.583968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 
00:24:48.745 [2024-05-16 20:23:35.584085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.745 [2024-05-16 20:23:35.584111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.745 qpair failed and we were unable to recover it. 00:24:48.746 [2024-05-16 20:23:35.584197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.746 [2024-05-16 20:23:35.584227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.746 qpair failed and we were unable to recover it. 00:24:48.746 [2024-05-16 20:23:35.584341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.746 [2024-05-16 20:23:35.584370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.746 qpair failed and we were unable to recover it. 00:24:48.746 [2024-05-16 20:23:35.584483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.746 [2024-05-16 20:23:35.584508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.746 qpair failed and we were unable to recover it. 00:24:48.746 [2024-05-16 20:23:35.584618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.746 [2024-05-16 20:23:35.584643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.746 qpair failed and we were unable to recover it. 00:24:48.746 [2024-05-16 20:23:35.584722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.746 [2024-05-16 20:23:35.584747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.746 qpair failed and we were unable to recover it. 00:24:48.746 [2024-05-16 20:23:35.584836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.746 [2024-05-16 20:23:35.584870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.746 qpair failed and we were unable to recover it. 00:24:48.746 [2024-05-16 20:23:35.584989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.746 [2024-05-16 20:23:35.585016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.746 qpair failed and we were unable to recover it. 00:24:48.746 [2024-05-16 20:23:35.585162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.746 [2024-05-16 20:23:35.585191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.746 qpair failed and we were unable to recover it. 00:24:48.746 [2024-05-16 20:23:35.585321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.746 [2024-05-16 20:23:35.585359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.746 qpair failed and we were unable to recover it. 
00:24:48.746 [2024-05-16 20:23:35.585507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.746 [2024-05-16 20:23:35.585535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.746 qpair failed and we were unable to recover it. 00:24:48.746 [2024-05-16 20:23:35.585628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.746 [2024-05-16 20:23:35.585654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.746 qpair failed and we were unable to recover it. 00:24:48.746 [2024-05-16 20:23:35.585742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.746 [2024-05-16 20:23:35.585768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.746 qpair failed and we were unable to recover it. 00:24:48.746 [2024-05-16 20:23:35.585863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.746 [2024-05-16 20:23:35.585890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.746 qpair failed and we were unable to recover it. 00:24:48.746 [2024-05-16 20:23:35.585969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.746 [2024-05-16 20:23:35.585995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.746 qpair failed and we were unable to recover it. 00:24:48.746 [2024-05-16 20:23:35.586136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.746 [2024-05-16 20:23:35.586162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.746 qpair failed and we were unable to recover it. 00:24:48.746 [2024-05-16 20:23:35.586241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.746 [2024-05-16 20:23:35.586267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.746 qpair failed and we were unable to recover it. 00:24:48.746 [2024-05-16 20:23:35.586402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.746 [2024-05-16 20:23:35.586428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.746 qpair failed and we were unable to recover it. 00:24:48.746 [2024-05-16 20:23:35.586523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.746 [2024-05-16 20:23:35.586548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.746 qpair failed and we were unable to recover it. 00:24:48.746 [2024-05-16 20:23:35.586653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.746 [2024-05-16 20:23:35.586679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.746 qpair failed and we were unable to recover it. 
00:24:48.746 [2024-05-16 20:23:35.586759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.746 [2024-05-16 20:23:35.586787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420
00:24:48.746 qpair failed and we were unable to recover it.
[The same failure sequence (posix_sock_create: connect() failed, errno = 111, then nvme_tcp_qpair_connect_sock: sock connection error, then "qpair failed and we were unable to recover it.") repeats for every remaining reconnect attempt against addr=10.0.0.2, port=4420 on tqpairs 0x1789f90, 0x7ff278000b90 and 0x7ff270000b90 through 20:23:35.613.]
00:24:48.751 [2024-05-16 20:23:35.613879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.751 [2024-05-16 20:23:35.613906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.751 qpair failed and we were unable to recover it. 00:24:48.751 [2024-05-16 20:23:35.614011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.751 [2024-05-16 20:23:35.614039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.751 qpair failed and we were unable to recover it. 00:24:48.751 [2024-05-16 20:23:35.614127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.751 [2024-05-16 20:23:35.614154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.614240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.614266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.614374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.614401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.614484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.614510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.614595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.614622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.614707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.614732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.614810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.614835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.614929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.614955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 
00:24:48.752 [2024-05-16 20:23:35.615049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.615074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.615160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.615185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.615283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.615310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.615399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.615431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.615514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.615540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.615661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.615686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.615783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.615808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.615894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.615920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.616034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.616059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.616142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.616172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 
00:24:48.752 [2024-05-16 20:23:35.616254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.616280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.616363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.616389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.616492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.616517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.616626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.616651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.616735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.616761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.616846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.616878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.616992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.617019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.617103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.617128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.617218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.617243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.617322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.617346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 
00:24:48.752 [2024-05-16 20:23:35.617433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.617460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.617545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.617573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.617659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.617685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.617772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.617801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.617892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.617920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.618036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.618063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.618152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.618178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.618264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.618291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.618376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.618402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.618511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.618537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 
00:24:48.752 [2024-05-16 20:23:35.618628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.618653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.752 [2024-05-16 20:23:35.618730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.752 [2024-05-16 20:23:35.618756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.752 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.618874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.618908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.618990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.619016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.619099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.619124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.619204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.619229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.619303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.619328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.619405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.619431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.619508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.619537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.619627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.619654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 
00:24:48.753 [2024-05-16 20:23:35.619740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.619765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.619864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.619890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.619998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.620023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.620114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.620143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.620233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.620258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.620351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.620379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.620464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.620490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.620574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.620601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.620698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.620724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.620805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.620831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 
00:24:48.753 [2024-05-16 20:23:35.620954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.620982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.621078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.621104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.621179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.621203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.621284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.621310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.621399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.621424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.621534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.621559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.621650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.621677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.621775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.621802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.621892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.621922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.622008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.622033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 
00:24:48.753 [2024-05-16 20:23:35.622144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.622169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.622256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.622285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.622361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.622387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.622481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.622509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.622597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.622623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.622718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.622744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.622860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.622886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.622972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.622998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.623087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.623113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.623191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.623216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 
00:24:48.753 [2024-05-16 20:23:35.623298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.623330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.623410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.623436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.623539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.623578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.623667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.623693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.623779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.623807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.623900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.623927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.624017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.624043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.753 qpair failed and we were unable to recover it. 00:24:48.753 [2024-05-16 20:23:35.624121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.753 [2024-05-16 20:23:35.624147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.624226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.624252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.624346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.624372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 
00:24:48.754 [2024-05-16 20:23:35.624463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.624491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.624577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.624605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.624685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.624711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.624820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.624846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.624948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.624974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.625055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.625080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.625163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.625189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.625278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.625303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.625389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.625414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.625495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.625520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 
00:24:48.754 [2024-05-16 20:23:35.625644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.625669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.625754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.625779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.625871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.625900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.625983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.626009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.626091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.626118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.626200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.626226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.626318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.626343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.626436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.626462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.626578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.626605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.626705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.626744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 
00:24:48.754 [2024-05-16 20:23:35.626859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.626886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.626976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.627004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.627092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.627117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.627230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.627255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.627370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.627394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.627483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.627508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.627596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.627620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.627698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.627722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.627799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.627823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.627920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.627945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 
00:24:48.754 [2024-05-16 20:23:35.628027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.628058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.628150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.628177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.628268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.628293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.628384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.628412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.628538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.628566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.628651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.628677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.628788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.628813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.628946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.628980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.629092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.629117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 00:24:48.754 [2024-05-16 20:23:35.629205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.754 [2024-05-16 20:23:35.629230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.754 qpair failed and we were unable to recover it. 
00:24:48.754 [2024-05-16 20:23:35.629348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.754 [2024-05-16 20:23:35.629373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420
00:24:48.754 qpair failed and we were unable to recover it.
00:24:48.754 [2024-05-16 20:23:35.629595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.754 [2024-05-16 20:23:35.629622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420
00:24:48.754 qpair failed and we were unable to recover it.
00:24:48.755 [2024-05-16 20:23:35.631165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.755 [2024-05-16 20:23:35.631195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:48.755 qpair failed and we were unable to recover it.
[... identical three-line failure sequence (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error / "qpair failed and we were unable to recover it.") repeats for tqpair=0x7ff278000b90, 0x7ff270000b90 and 0x1789f90 against addr=10.0.0.2, port=4420 through 2024-05-16 20:23:35.655131 ...]
00:24:48.759 [2024-05-16 20:23:35.655220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.759 [2024-05-16 20:23:35.655246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.759 qpair failed and we were unable to recover it. 00:24:48.759 [2024-05-16 20:23:35.655356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.759 [2024-05-16 20:23:35.655382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.759 qpair failed and we were unable to recover it. 00:24:48.759 [2024-05-16 20:23:35.655473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.759 [2024-05-16 20:23:35.655499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.759 qpair failed and we were unable to recover it. 00:24:48.759 [2024-05-16 20:23:35.655587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.759 [2024-05-16 20:23:35.655615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.759 qpair failed and we were unable to recover it. 00:24:48.759 [2024-05-16 20:23:35.655758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.759 [2024-05-16 20:23:35.655784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.759 qpair failed and we were unable to recover it. 00:24:48.759 [2024-05-16 20:23:35.655873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.759 [2024-05-16 20:23:35.655899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.759 qpair failed and we were unable to recover it. 00:24:48.759 [2024-05-16 20:23:35.656023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.759 [2024-05-16 20:23:35.656048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.759 qpair failed and we were unable to recover it. 00:24:48.759 [2024-05-16 20:23:35.656130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.759 [2024-05-16 20:23:35.656156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.759 qpair failed and we were unable to recover it. 00:24:48.759 [2024-05-16 20:23:35.656238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.759 [2024-05-16 20:23:35.656263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.759 qpair failed and we were unable to recover it. 00:24:48.759 [2024-05-16 20:23:35.656363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.759 [2024-05-16 20:23:35.656388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.759 qpair failed and we were unable to recover it. 
00:24:48.759 [2024-05-16 20:23:35.656477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.759 [2024-05-16 20:23:35.656502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.759 qpair failed and we were unable to recover it. 00:24:48.759 [2024-05-16 20:23:35.656585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.759 [2024-05-16 20:23:35.656611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.759 qpair failed and we were unable to recover it. 00:24:48.759 [2024-05-16 20:23:35.656693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.759 [2024-05-16 20:23:35.656718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.759 qpair failed and we were unable to recover it. 00:24:48.759 [2024-05-16 20:23:35.656800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.759 [2024-05-16 20:23:35.656826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.759 qpair failed and we were unable to recover it. 00:24:48.759 [2024-05-16 20:23:35.656928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.759 [2024-05-16 20:23:35.656954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.759 qpair failed and we were unable to recover it. 00:24:48.759 [2024-05-16 20:23:35.657032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.759 [2024-05-16 20:23:35.657057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.759 qpair failed and we were unable to recover it. 00:24:48.759 [2024-05-16 20:23:35.657142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.759 [2024-05-16 20:23:35.657168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.759 qpair failed and we were unable to recover it. 00:24:48.759 [2024-05-16 20:23:35.657246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.759 [2024-05-16 20:23:35.657271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.759 qpair failed and we were unable to recover it. 00:24:48.759 [2024-05-16 20:23:35.657360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.759 [2024-05-16 20:23:35.657386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.759 qpair failed and we were unable to recover it. 00:24:48.759 [2024-05-16 20:23:35.657460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.759 [2024-05-16 20:23:35.657485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.759 qpair failed and we were unable to recover it. 
00:24:48.760 [2024-05-16 20:23:35.657591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.657616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.657695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.657720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.657837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.657874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.657971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.657996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.658082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.658107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.658212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.658237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.658353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.658378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.658486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.658513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.658599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.658625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.658703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.658727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 
00:24:48.760 [2024-05-16 20:23:35.658802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.658828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.658928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.658954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.659038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.659063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.659144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.659169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.659279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.659304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.659390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.659424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.659536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.659563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.659646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.659671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.659756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.659782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.659873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.659900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 
00:24:48.760 [2024-05-16 20:23:35.660006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.660033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.660126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.660150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.660234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.660259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.660394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.660419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.660508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.660533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.660633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.660672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.660787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.660812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.660920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.660947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.661055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.661081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.661170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.661195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 
00:24:48.760 [2024-05-16 20:23:35.661304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.661329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.661432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.661457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.661538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.661563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.661663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.661702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.661787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.661815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.661952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.661978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.662071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.662095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.662184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.662211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.662299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.662325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.662412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.662438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 
00:24:48.760 [2024-05-16 20:23:35.662541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.662579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.662721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.662747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.662862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.662891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.663008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.663034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.663120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.663144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.663224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.663249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.663337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.663363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.663476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.663501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.663605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.663644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 00:24:48.760 [2024-05-16 20:23:35.663761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.760 [2024-05-16 20:23:35.663787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.760 qpair failed and we were unable to recover it. 
00:24:48.760 [2024-05-16 20:23:35.663905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.663931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.664014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.664039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.664133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.664157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.664271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.664295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.664404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.664430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.664518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.664544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.664637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.664663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.664742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.664768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.664859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.664885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.665021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.665047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 
00:24:48.761 [2024-05-16 20:23:35.665134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.665159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.665235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.665260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.665348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.665373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.665462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.665488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.665602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.665630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.665718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.665744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.665862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.665889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.665978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.666004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.666093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.666122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.666246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.666273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 
00:24:48.761 [2024-05-16 20:23:35.666358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.666383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.666461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.666486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.666570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.666596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.666689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.666727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.666824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.666850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.666954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.666979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.667062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.667086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.667174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.667199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.667316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.667340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.667419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.667444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 
00:24:48.761 [2024-05-16 20:23:35.667558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.667582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.667670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.667698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.667783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.667813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.667960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.667987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.668070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.668096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.668178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.668204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.668319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.668344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.668459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.668484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.668566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.668592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.668670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.668696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 
00:24:48.761 [2024-05-16 20:23:35.668786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.668812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.668963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.668991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.669084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.669112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.669198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.669223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.669347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.669373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.669455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.669480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.669593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.669620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.669723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.669761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.669862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.669890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 00:24:48.761 [2024-05-16 20:23:35.670003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.761 [2024-05-16 20:23:35.670029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.761 qpair failed and we were unable to recover it. 
00:24:48.762 [2024-05-16 20:23:35.670111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.670135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.670215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.670240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.670353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.670377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.670454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.670481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.670569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.670594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.670704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.670729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.670819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.670845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.670929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.670954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.671068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.671093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.671182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.671213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 
00:24:48.762 [2024-05-16 20:23:35.671306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.671332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.671435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.671460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.671544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.671570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.671695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.671721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.671806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.671830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.671924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.671952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.672031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.672056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.672169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.672196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.672336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.672361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.672474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.672500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 
00:24:48.762 [2024-05-16 20:23:35.672593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.672617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.672749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.672776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.672913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.672940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.673056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.673081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.673163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.673189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.673275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.673300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.673410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.673436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.673516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.673542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.673627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.673654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.673742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.673770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 
00:24:48.762 [2024-05-16 20:23:35.673863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.673891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.673980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.674004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.674086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.674112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.674209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.674233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.674328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.674354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.674437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.674462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.674583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.674611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.674699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.674725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.674818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.674845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.674942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.674968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 
00:24:48.762 [2024-05-16 20:23:35.675078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.675103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.675195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.675220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.675304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.675331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.675433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.675458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.675543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.675568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.675652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.675677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.675760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.675785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.675890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.675918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.676022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.676049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.676128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.676153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 
00:24:48.762 [2024-05-16 20:23:35.676254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.676279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.676405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.676430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.676545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.676570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.676649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.676675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.676752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.676777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.762 [2024-05-16 20:23:35.676863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.762 [2024-05-16 20:23:35.676890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.762 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.676965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.676990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.677076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.677101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.677212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.677237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.677322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.677349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 
00:24:48.763 [2024-05-16 20:23:35.677474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.677504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.677593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.677618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.677707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.677734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.677826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.677856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.677945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.677971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.678057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.678084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.678209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.678235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.678315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.678341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.678428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.678454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.678573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.678600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 
00:24:48.763 [2024-05-16 20:23:35.678681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.678706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.678830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.678861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.678960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.678984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.679076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.679100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.679187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.679213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.679322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.679348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.679435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.679465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.679553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.679579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.679718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.679743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.679829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.679861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 
00:24:48.763 [2024-05-16 20:23:35.679952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.679979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.680068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.680094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.680170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.680195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.680318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.680343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.680431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.680457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.680578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.680603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.680678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.680703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.680793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.680817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.680959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.680985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.681069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.681094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 
00:24:48.763 [2024-05-16 20:23:35.681181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.681205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.681316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.681341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.681430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.681470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.681590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.681617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.681731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.681756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.681841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.681876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.681962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.681988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.682069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.682095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.682207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.763 [2024-05-16 20:23:35.682234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.763 qpair failed and we were unable to recover it. 00:24:48.763 [2024-05-16 20:23:35.682369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.682397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 
00:24:48.764 [2024-05-16 20:23:35.682517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.682543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.682629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.682654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.682739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.682765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.682861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.682888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.682988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.683013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.683097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.683123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.683323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.683348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.683437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.683462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.683543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.683568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.683651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.683677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 
00:24:48.764 [2024-05-16 20:23:35.683751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.683793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.683916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.683943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.684062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.684088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.684189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.684215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.684303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.684328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.684413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.684438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.684553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.684583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.684673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.684698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.684806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.684831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.684913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.684939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 
00:24:48.764 [2024-05-16 20:23:35.685024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.685049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.685133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.685159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.685319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.685345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.685425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.685450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.685538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.685564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.685673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.685699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.685809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.685834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.685942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.685968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.686054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.686080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.686156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.686181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 
00:24:48.764 [2024-05-16 20:23:35.686286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.686311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.686388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.686414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.686522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.686548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.686686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.686711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.686792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.686817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.686918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.686944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.687054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.687079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.687171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.687197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.687309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.687334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.687420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.687446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 
00:24:48.764 [2024-05-16 20:23:35.687531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.687556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.687675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.687701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.687820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.687867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.687998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.688041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.688171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.688199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.688296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.688321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.688402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.688427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.688517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.688543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.688656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.688683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 00:24:48.764 [2024-05-16 20:23:35.688769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.764 [2024-05-16 20:23:35.688793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.764 qpair failed and we were unable to recover it. 
00:24:48.764 [2024-05-16 20:23:35.688883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.688909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.689007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.689033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.689122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.689148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.689238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.689265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.689379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.689404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.689484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.689509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.689600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.689625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.689742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.689768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.689873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.689912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.690031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.690067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 
00:24:48.765 [2024-05-16 20:23:35.690189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.690217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.690304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.690330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.690444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.690472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.690588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.690615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.690710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.690735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.690822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.690848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.690946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.690973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.691067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.691093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.691175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.691200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.691280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.691305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 
00:24:48.765 [2024-05-16 20:23:35.691400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.691425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.691564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.691595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.691713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.691742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.691876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.691907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.692006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.692031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.692138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.692164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.692278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.692303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.692418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.692444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.692528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.692557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.692649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.692674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 
00:24:48.765 [2024-05-16 20:23:35.692790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.692816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.692939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.692965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.693064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.693089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.693184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.693215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.693305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.693331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.693425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.693451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.693564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.693589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.693673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.693698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.693813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.693839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.693948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.693976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 
00:24:48.765 [2024-05-16 20:23:35.694084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.694109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.694197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.694225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.694316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.694341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.694455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.694483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.694573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.694599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.694681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.694708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.694796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.694822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.694935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.694966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.765 [2024-05-16 20:23:35.695091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.765 [2024-05-16 20:23:35.695118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.765 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.695195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.695220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 
00:24:48.766 [2024-05-16 20:23:35.695329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.695354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.695435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.695461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.695566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.695591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.695694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.695720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.695794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.695820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.695918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.695944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.696024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.696048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.696156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.696180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.696303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.696327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.696406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.696430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 
00:24:48.766 [2024-05-16 20:23:35.696514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.696543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.696629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.696654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.696749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.696774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.696862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.696888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.696998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.697024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.697113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.697138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.697250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.697276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.697359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.697384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.697527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.697554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.697756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.697782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 
00:24:48.766 [2024-05-16 20:23:35.697915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.697944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.698039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.698064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.698154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.698179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.698293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.698320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.698412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.698438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.698557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.698582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.698791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.698830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.698927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.698954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.699041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.699068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.699154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.699179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 
00:24:48.766 [2024-05-16 20:23:35.699267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.699292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.699406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.699432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.699523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.699551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.699642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.699669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.699757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.699783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.699871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.699897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.700006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.700031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.700120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.700150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.700236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.700261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.700342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.700366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 
00:24:48.766 [2024-05-16 20:23:35.700479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.700504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.700592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.700616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.700704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.700732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.700826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.700858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.700977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.701003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.701084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.701109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.766 qpair failed and we were unable to recover it. 00:24:48.766 [2024-05-16 20:23:35.701200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.766 [2024-05-16 20:23:35.701226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.701309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.701334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.701443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.701469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.701578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.701604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 
00:24:48.767 [2024-05-16 20:23:35.701708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.701747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.701834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.701867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.701955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.701981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.702067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.702095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.702234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.702259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.702343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.702369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.702458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.702484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.702583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.702621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.702716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.702743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.702830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.702861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 
00:24:48.767 [2024-05-16 20:23:35.702953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.702979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.703064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.703089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.703216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.703243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.703341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.703367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.703479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.703514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.703631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.703658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.703775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.703802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.703894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.703920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.704027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.704053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.704147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.704174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 
00:24:48.767 [2024-05-16 20:23:35.704256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.704282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.704372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.704396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.704511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.704536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.704611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.704635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.704730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.704758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.704839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.704870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.704955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.704981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.705058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.705083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.705174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.705200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.705289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.705314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 
00:24:48.767 [2024-05-16 20:23:35.705399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.705424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.705528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.705553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.705641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.705666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.705754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.705780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.705875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.705901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.705986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.706012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.706092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.706116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.706199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.706224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.706338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.706363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.706502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.706527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 
00:24:48.767 [2024-05-16 20:23:35.706615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.706640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.706725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.706751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.706868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.706896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.706991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.707017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.707103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.707130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.707240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.707265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.707351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.707377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.707462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.707490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.707580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.707607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 00:24:48.767 [2024-05-16 20:23:35.707724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.707750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.767 qpair failed and we were unable to recover it. 
00:24:48.767 [2024-05-16 20:23:35.707829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.767 [2024-05-16 20:23:35.707859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.707970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.707996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.708084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.708109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.708193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.708217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.708330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.708362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.708470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.708497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.708585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.708611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.708719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.708745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.708832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.708865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.708976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.709005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 
00:24:48.768 [2024-05-16 20:23:35.709094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.709120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.709207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.709232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.709313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.709338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.709422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.709446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.709555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.709580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.709671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.709695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.709779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.709803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.709898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.709924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.710007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.710032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.710117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.710142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 
00:24:48.768 [2024-05-16 20:23:35.710225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.710250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.710360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.710385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.710469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.710497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.710617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.710644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.710725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.710752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.710865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.710893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.710982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.711011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.711102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.711130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.711220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.711245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.711361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.711387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 
00:24:48.768 [2024-05-16 20:23:35.711464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.711489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.711572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.711602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.711720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.711746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.711833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.711866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.711950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.711976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.712088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.712114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.712247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.712273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.712360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.712387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.712503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.712529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.712621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.712649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 
00:24:48.768 [2024-05-16 20:23:35.712757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.712783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.712907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.712935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.713020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.713047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.713139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.713164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.713280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.713306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.713389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.713415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.713501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.713530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.713643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.713669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.713775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.713801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.713888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.713916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 
00:24:48.768 [2024-05-16 20:23:35.714003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.714029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.714141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.714166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.714246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.714272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.714381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.714407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.714484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.714509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.714628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.714665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.714788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.714815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.714935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.714960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.715055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.768 [2024-05-16 20:23:35.715082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.768 qpair failed and we were unable to recover it. 00:24:48.768 [2024-05-16 20:23:35.715168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.769 [2024-05-16 20:23:35.715194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.769 qpair failed and we were unable to recover it. 
00:24:48.769 [2024-05-16 20:23:35.715678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.769 [2024-05-16 20:23:35.715707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:48.769 qpair failed and we were unable to recover it.
00:24:48.772 [2024-05-16 20:23:35.730389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.772 [2024-05-16 20:23:35.730423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:24:48.772 qpair failed and we were unable to recover it.
00:24:48.774 [2024-05-16 20:23:35.740773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.774 [2024-05-16 20:23:35.740799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.774 qpair failed and we were unable to recover it. 00:24:48.774 [2024-05-16 20:23:35.740918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.774 [2024-05-16 20:23:35.740944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.774 qpair failed and we were unable to recover it. 00:24:48.774 [2024-05-16 20:23:35.741058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.774 [2024-05-16 20:23:35.741083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.774 qpair failed and we were unable to recover it. 00:24:48.774 [2024-05-16 20:23:35.741168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.774 [2024-05-16 20:23:35.741195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.774 qpair failed and we were unable to recover it. 00:24:48.774 [2024-05-16 20:23:35.741285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.774 [2024-05-16 20:23:35.741310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.774 qpair failed and we were unable to recover it. 00:24:48.774 [2024-05-16 20:23:35.741397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.774 [2024-05-16 20:23:35.741422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.774 qpair failed and we were unable to recover it. 00:24:48.774 [2024-05-16 20:23:35.741539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.774 [2024-05-16 20:23:35.741565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.774 qpair failed and we were unable to recover it. 00:24:48.774 [2024-05-16 20:23:35.741652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.774 [2024-05-16 20:23:35.741677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.774 qpair failed and we were unable to recover it. 00:24:48.774 [2024-05-16 20:23:35.741784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.774 [2024-05-16 20:23:35.741809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.774 qpair failed and we were unable to recover it. 00:24:48.774 [2024-05-16 20:23:35.741926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.774 [2024-05-16 20:23:35.741953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.774 qpair failed and we were unable to recover it. 
00:24:48.774 [2024-05-16 20:23:35.742041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.774 [2024-05-16 20:23:35.742066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.774 qpair failed and we were unable to recover it. 00:24:48.774 [2024-05-16 20:23:35.742155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.774 [2024-05-16 20:23:35.742181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.774 qpair failed and we were unable to recover it. 00:24:48.774 [2024-05-16 20:23:35.742264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.774 [2024-05-16 20:23:35.742290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.774 qpair failed and we were unable to recover it. 00:24:48.774 [2024-05-16 20:23:35.742420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.774 [2024-05-16 20:23:35.742447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.774 qpair failed and we were unable to recover it. 00:24:48.774 [2024-05-16 20:23:35.742534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.774 [2024-05-16 20:23:35.742561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.774 qpair failed and we were unable to recover it. 00:24:48.774 [2024-05-16 20:23:35.742648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.774 [2024-05-16 20:23:35.742674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.774 qpair failed and we were unable to recover it. 00:24:48.774 [2024-05-16 20:23:35.742754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.774 [2024-05-16 20:23:35.742779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.774 qpair failed and we were unable to recover it. 00:24:48.774 [2024-05-16 20:23:35.742863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.742898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.742988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.743014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.743096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.743123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 
00:24:48.775 [2024-05-16 20:23:35.743231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.743257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.743351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.743376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.743461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.743487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.743593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.743619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.743715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.743742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.743837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.743869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.743983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.744009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.744103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.744129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.744207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.744232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.744317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.744346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 
00:24:48.775 [2024-05-16 20:23:35.744485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.744511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.744621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.744651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.744796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.744821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.744942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.744968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.745057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.745082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.745179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.745206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.745295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.745320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.745430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.745454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.745543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.745568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.745653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.745681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 
00:24:48.775 [2024-05-16 20:23:35.745766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.745790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.745897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.745923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.746006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.746032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.746108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.746134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.746240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.746266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.746374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.746400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.775 [2024-05-16 20:23:35.746526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.775 [2024-05-16 20:23:35.746552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.775 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.746633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.746658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.746746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.746772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.746856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.746882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 
00:24:48.776 [2024-05-16 20:23:35.746992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.747018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.747130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.747156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.747241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.747268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.747383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.747408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.747503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.747529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.747628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.747654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.747736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.747762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.747870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.747897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.747979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.748004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.748085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.748110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 
00:24:48.776 [2024-05-16 20:23:35.748197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.748222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.748303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.748328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.748403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.748429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.748519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.748545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.748620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.748645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.748721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.748746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.748865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.748891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.748972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.748997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.749098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.749123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.749204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.749228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 
00:24:48.776 [2024-05-16 20:23:35.749343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.749368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.749459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.749484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.749575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.749601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.749692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.749718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.749820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.749868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.749985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.750020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.750168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.750195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.750286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.750313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.750410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.750437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.750578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.750610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 
00:24:48.776 [2024-05-16 20:23:35.750725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.750752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.750841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.750880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.750973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.750998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.776 qpair failed and we were unable to recover it. 00:24:48.776 [2024-05-16 20:23:35.751086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.776 [2024-05-16 20:23:35.751112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.751222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.751247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.751338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.751362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.751458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.751484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.751561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.751587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.751723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.751761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.751865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.751893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 
00:24:48.777 [2024-05-16 20:23:35.751976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.752001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.752086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.752111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.752205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.752230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.752326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.752350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.752442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.752468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.752547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.752571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.752677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.752703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.752807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.752831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.752959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.752990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.753095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.753136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 
00:24:48.777 [2024-05-16 20:23:35.753258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.753286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.753367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.753393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.753480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.753505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.753597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.753622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.753707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.753733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.753826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.753859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.753980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.754011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.754124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.754150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.754245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.754273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.754363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.754388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 
00:24:48.777 [2024-05-16 20:23:35.754465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.754492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.754613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.754638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.754719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.754746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.754872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.754899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.754990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.755015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.755093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.755120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.755205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.755231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.755346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.755372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.755460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.755488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.755598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.755628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 
00:24:48.777 [2024-05-16 20:23:35.755717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.777 [2024-05-16 20:23:35.755742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.777 qpair failed and we were unable to recover it. 00:24:48.777 [2024-05-16 20:23:35.755828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.755858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.755963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.755988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.756067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.756092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.756207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.756231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.756316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.756341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.756431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.756456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.756537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.756562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.756647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.756671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.756784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.756809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 
00:24:48.778 [2024-05-16 20:23:35.756932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.756959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.757046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.757072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.757188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.757213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.757327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.757353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.757439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.757465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.757550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.757576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.757657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.757683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.757766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.757791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.757903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.757929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.758005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.758031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 
00:24:48.778 [2024-05-16 20:23:35.758105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.758130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.758208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.758233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.758323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.758350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.758441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.758470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.758615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.758641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.758753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.758779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.758867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.758899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.758998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.759024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.759112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.759140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.759230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.759255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 
00:24:48.778 [2024-05-16 20:23:35.759347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.759373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.759488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.759515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.759600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.759625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.759713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.759738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.759818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.759844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.759970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.759995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.760109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.778 [2024-05-16 20:23:35.760134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.778 qpair failed and we were unable to recover it. 00:24:48.778 [2024-05-16 20:23:35.760242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.760267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.760353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.760379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.760496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.760522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 
00:24:48.779 [2024-05-16 20:23:35.760610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.760636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.760724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.760750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.760839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.760872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.761016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.761041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.761122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.761148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.761229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.761254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.761328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.761353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.761431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.761457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.761591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.761617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.761726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.761751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 
00:24:48.779 [2024-05-16 20:23:35.761832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.761865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.761956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.761982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.762088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.762113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.762193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.762220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.762332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.762357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.762470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.762497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.762592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.762617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.762727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.762753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.762868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.762895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.762991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.763017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 
00:24:48.779 [2024-05-16 20:23:35.763104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.763130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.763213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.763239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.763328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.763353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.763438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.763464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.763601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.763627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.763708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.763733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.763846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.763880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.779 qpair failed and we were unable to recover it. 00:24:48.779 [2024-05-16 20:23:35.763969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.779 [2024-05-16 20:23:35.763994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.764108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.764133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.764225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.764250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 
00:24:48.780 [2024-05-16 20:23:35.764341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.764366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.764477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.764502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.764618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.764645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.764731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.764756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.764863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.764903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.765029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.765056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.765140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.765165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.765284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.765309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.765406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.765431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.765506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.765531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 
00:24:48.780 [2024-05-16 20:23:35.765640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.765667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.765757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.765782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.765871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.765900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.766012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.766037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.766131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.766157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.766268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.766293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.766377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.766403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.766487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.766512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.766604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.766631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.766737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.766763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 
00:24:48.780 [2024-05-16 20:23:35.766858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.766885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.766972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.766999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.767080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.767106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.767186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.767215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.767289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.767314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.767429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.767454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.767566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.767592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.767667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.767692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.767786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.767811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.767932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.767958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 
00:24:48.780 [2024-05-16 20:23:35.768072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.780 [2024-05-16 20:23:35.768097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.780 qpair failed and we were unable to recover it. 00:24:48.780 [2024-05-16 20:23:35.768176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.768201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.768291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.768317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.768396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.768421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.768498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.768524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.768631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.768657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.768743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.768768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.768868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.768897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.769013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.769038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.769123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.769148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 
00:24:48.781 [2024-05-16 20:23:35.769248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.769273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.769371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.769397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.769485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.769510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.769616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.769641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.769730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.769756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.769839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.769871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.769969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.769994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.770114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.770140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.770222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.770247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.770336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.770364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 
00:24:48.781 [2024-05-16 20:23:35.770459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.770486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.770595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.770633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.770753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.770780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.770876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.770903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.770994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.771020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.771106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.771130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.771242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.771268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.771359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.771386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.771468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.771493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.771583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.771610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 
00:24:48.781 [2024-05-16 20:23:35.771747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.771772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.771913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.771939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.772026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.772052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.772140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.772170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.772279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.772305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.772395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.772421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.772567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.772594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.772685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.772715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.772811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.772837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 00:24:48.781 [2024-05-16 20:23:35.772933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.781 [2024-05-16 20:23:35.772959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.781 qpair failed and we were unable to recover it. 
00:24:48.781 [2024-05-16 20:23:35.773068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.773093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.773181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.773207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.773317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.773342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.773451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.773477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.773596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.773622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.773706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.773732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.773864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.773903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.774080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.774115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.774234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.774261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.774357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.774385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 
00:24:48.782 [2024-05-16 20:23:35.774508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.774534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.774655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.774682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.774794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.774821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.774949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.774976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.775085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.775113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.775205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.775232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.775340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.775366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.775464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.775492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.775585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.775612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.775696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.775722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 
00:24:48.782 [2024-05-16 20:23:35.775868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.775917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.776005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.776032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.776122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.776148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.776257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.776284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.776371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.776397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.776504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.776529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.776643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.776669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.776769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.776809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.776926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.776965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.777064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.777091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 
00:24:48.782 [2024-05-16 20:23:35.777211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.777237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.782 [2024-05-16 20:23:35.777368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.782 [2024-05-16 20:23:35.777392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.782 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.777515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.777539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.777622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.777657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.777765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.777790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.777882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.777915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.778004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.778029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.778140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.778165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.778305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.778331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.778404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.778428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 
00:24:48.783 [2024-05-16 20:23:35.778507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.778532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.778631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.778657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.778789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.778814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.778904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.778930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.779038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.779064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.779148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.779173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.779310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.779335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.779417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.779443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.779553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.779578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.779692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.779717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 
00:24:48.783 [2024-05-16 20:23:35.779838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.779869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.779978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.780003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.780115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.780141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.780227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.780253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.780368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.780393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.780478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.780504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.780583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.780609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.780694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.780719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.780803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.780834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.780958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.780992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 
00:24:48.783 [2024-05-16 20:23:35.781125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.781163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.781306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.781332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.781441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.781465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.781578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.781604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.781688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.781712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.781799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.781824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.781941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.781967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.782052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.782078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.783 qpair failed and we were unable to recover it. 00:24:48.783 [2024-05-16 20:23:35.782184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.783 [2024-05-16 20:23:35.782209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.782299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.782325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 
00:24:48.784 [2024-05-16 20:23:35.782415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.782439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.782524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.782548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.782655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.782680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.782759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.782784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.782889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.782929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.783030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.783062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.783157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.783184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.783295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.783322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.783413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.783440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.783532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.783558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 
00:24:48.784 [2024-05-16 20:23:35.783674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.783701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.783792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.783822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.783952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.783978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.784068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.784094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.784206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.784231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.784353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.784378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.784463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.784489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.784591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.784620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.784713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.784741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.784860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.784886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 
00:24:48.784 [2024-05-16 20:23:35.784974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.784999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.785090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.785116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.785260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.785287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.785381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.785407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.785494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.785520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.784 qpair failed and we were unable to recover it. 00:24:48.784 [2024-05-16 20:23:35.785609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.784 [2024-05-16 20:23:35.785635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 00:24:48.785 [2024-05-16 20:23:35.785725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.785 [2024-05-16 20:23:35.785751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 00:24:48.785 [2024-05-16 20:23:35.785890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.785 [2024-05-16 20:23:35.785916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 00:24:48.785 [2024-05-16 20:23:35.786018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.785 [2024-05-16 20:23:35.786043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 00:24:48.785 [2024-05-16 20:23:35.786122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.785 [2024-05-16 20:23:35.786147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 
00:24:48.785 [2024-05-16 20:23:35.786292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.785 [2024-05-16 20:23:35.786326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 00:24:48.785 [2024-05-16 20:23:35.786440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.785 [2024-05-16 20:23:35.786467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 00:24:48.785 [2024-05-16 20:23:35.786578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.785 [2024-05-16 20:23:35.786603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 00:24:48.785 [2024-05-16 20:23:35.786719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.785 [2024-05-16 20:23:35.786745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 00:24:48.785 [2024-05-16 20:23:35.786834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.785 [2024-05-16 20:23:35.786868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 00:24:48.785 [2024-05-16 20:23:35.786958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.785 [2024-05-16 20:23:35.786984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 00:24:48.785 [2024-05-16 20:23:35.787063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.785 [2024-05-16 20:23:35.787088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 00:24:48.785 [2024-05-16 20:23:35.787175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.785 [2024-05-16 20:23:35.787200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 00:24:48.785 [2024-05-16 20:23:35.787315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.785 [2024-05-16 20:23:35.787340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 00:24:48.785 [2024-05-16 20:23:35.787449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.785 [2024-05-16 20:23:35.787475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 
00:24:48.785 [2024-05-16 20:23:35.787564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.785 [2024-05-16 20:23:35.787591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 00:24:48.785 [2024-05-16 20:23:35.787690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.785 [2024-05-16 20:23:35.787728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 00:24:48.785 [2024-05-16 20:23:35.787859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.785 [2024-05-16 20:23:35.787886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 00:24:48.785 [2024-05-16 20:23:35.787972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.785 [2024-05-16 20:23:35.787998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 00:24:48.785 [2024-05-16 20:23:35.788086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.785 [2024-05-16 20:23:35.788112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 00:24:48.785 [2024-05-16 20:23:35.788223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.785 [2024-05-16 20:23:35.788249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.785 qpair failed and we were unable to recover it. 00:24:48.785 [2024-05-16 20:23:35.788359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.788386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.788476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.788504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.788624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.788650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.788767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.788793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 
00:24:48.786 [2024-05-16 20:23:35.788883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.788910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.789009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.789035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.789122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.789148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.789281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.789307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.789421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.789447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.789559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.789584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.789664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.789689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.789783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.789812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.789933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.789959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.790041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.790066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 
00:24:48.786 [2024-05-16 20:23:35.790182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.790207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.790314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.790339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.790452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.790477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.790562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.790587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.790673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.790700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.790810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.790836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.790974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.791006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.791094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.791120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.791237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.791264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.791401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.791428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 
00:24:48.786 [2024-05-16 20:23:35.791537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.791564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.791671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.791709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.791805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.791833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.786 qpair failed and we were unable to recover it. 00:24:48.786 [2024-05-16 20:23:35.791946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.786 [2024-05-16 20:23:35.791975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.792089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.792116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.792229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.792256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.792345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.792371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.792486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.792512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.792659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.792685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.792811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.792838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 
00:24:48.787 [2024-05-16 20:23:35.792934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.792962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.793076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.793101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.793194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.793220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.793330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.793355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.793510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.793548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.793641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.793668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.793813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.793839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.793928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.793954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.794046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.794072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.794163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.794188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 
00:24:48.787 [2024-05-16 20:23:35.794294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.794319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.794438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.794464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.794573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.794599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.794686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.794714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.794832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.794865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.794958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.794983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.795069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.795094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.795177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.795203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.795288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.795313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.795401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.795426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 
00:24:48.787 [2024-05-16 20:23:35.795513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.795538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.787 [2024-05-16 20:23:35.795644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.787 [2024-05-16 20:23:35.795668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.787 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.795774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.795802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.795890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.795918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.796039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.796064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.796201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.796226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.796308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.796334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.796470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.796495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.796581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.796608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.796698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.796724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 
00:24:48.788 [2024-05-16 20:23:35.796830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.796861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.796977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.797003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.797077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.797102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.797218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.797249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.797357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.797383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.797461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.797487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.797620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.797645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.797729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.797755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.797833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.797874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.797961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.797987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 
00:24:48.788 [2024-05-16 20:23:35.798073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.798099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.798234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.798259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.798363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.798389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.798494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.798520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.798653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.798679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.798768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.798794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.798881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.798907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.798988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.799014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.799098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.799123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.788 qpair failed and we were unable to recover it. 00:24:48.788 [2024-05-16 20:23:35.799224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.788 [2024-05-16 20:23:35.799249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 
00:24:48.789 [2024-05-16 20:23:35.799380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.799405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.799515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.799541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.799625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.799650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.799758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.799783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.799889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.799915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.799993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.800018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.800130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.800154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.800259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.800284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.800431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.800471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.800574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.800612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 
00:24:48.789 [2024-05-16 20:23:35.800708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.800735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.800850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.800883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.800993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.801018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.801132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.801158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.801239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.801266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.801385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.801413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.801534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.801561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.801671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.801697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.801813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.801838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.801952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.801987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 
00:24:48.789 [2024-05-16 20:23:35.802105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.802132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.802248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.802279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.802370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.802396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.802510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.802535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.802617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.802643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.802751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.802776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.802887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.789 [2024-05-16 20:23:35.802913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.789 qpair failed and we were unable to recover it. 00:24:48.789 [2024-05-16 20:23:35.802986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.803012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.803091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.803116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.803196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.803221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 
00:24:48.790 [2024-05-16 20:23:35.803349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.803374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.803485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.803510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.803624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.803650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.803731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.803756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.803869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.803895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.804007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.804039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.804122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.804149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.804234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.804260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.804349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.804375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.804489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.804515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 
00:24:48.790 [2024-05-16 20:23:35.804605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.804634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.804755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.804782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.804899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.804925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.805015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.805040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.805123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.805148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.805265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.805290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.805400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.805425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.805534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.805562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.805679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.805710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.805799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.805826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 
00:24:48.790 [2024-05-16 20:23:35.805926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.805952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.806070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.806097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.806209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.806234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.806374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.806399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.806520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.790 [2024-05-16 20:23:35.806547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.790 qpair failed and we were unable to recover it. 00:24:48.790 [2024-05-16 20:23:35.806646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.806674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 00:24:48.791 [2024-05-16 20:23:35.806760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.806786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 00:24:48.791 [2024-05-16 20:23:35.806878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.806904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 00:24:48.791 [2024-05-16 20:23:35.806986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.807012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 00:24:48.791 [2024-05-16 20:23:35.807113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.807138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 
00:24:48.791 [2024-05-16 20:23:35.807244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.807270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 00:24:48.791 [2024-05-16 20:23:35.807379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.807404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 00:24:48.791 [2024-05-16 20:23:35.807492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.807520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 00:24:48.791 [2024-05-16 20:23:35.807639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.807664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 00:24:48.791 [2024-05-16 20:23:35.807757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.807786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 00:24:48.791 [2024-05-16 20:23:35.807898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.807924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 00:24:48.791 [2024-05-16 20:23:35.808041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.808068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 00:24:48.791 [2024-05-16 20:23:35.808159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.808185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 00:24:48.791 [2024-05-16 20:23:35.808266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.808292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 00:24:48.791 [2024-05-16 20:23:35.808385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.808411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 
00:24:48.791 [2024-05-16 20:23:35.808515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.808542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 00:24:48.791 [2024-05-16 20:23:35.808621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.808646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 00:24:48.791 [2024-05-16 20:23:35.808788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.808814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 00:24:48.791 [2024-05-16 20:23:35.808926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.808952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 00:24:48.791 [2024-05-16 20:23:35.809061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.809086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 00:24:48.791 [2024-05-16 20:23:35.809204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.791 [2024-05-16 20:23:35.809235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.791 qpair failed and we were unable to recover it. 00:24:48.791 [2024-05-16 20:23:35.809318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.809344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.809434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.809461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.809547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.809574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.809703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.809742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 
00:24:48.792 [2024-05-16 20:23:35.809864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.809891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.810034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.810060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.810142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.810168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.810288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.810313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.810425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.810451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.810541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.810567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.810661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.810688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.810832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.810865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.810982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.811007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.811102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.811128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 
00:24:48.792 [2024-05-16 20:23:35.811240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.811266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.811400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.811425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.811533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.811559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.811644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.811671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.811777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.811804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.811919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.811957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.812046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.812072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.812179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.812205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.812289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.812314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.812404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.812429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 
00:24:48.792 [2024-05-16 20:23:35.812508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.812534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.812621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.812649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.812756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.812788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.792 qpair failed and we were unable to recover it. 00:24:48.792 [2024-05-16 20:23:35.812875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.792 [2024-05-16 20:23:35.812902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.812985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.813012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.813105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.813133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.813226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.813253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.813350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.813377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.813496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.813523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.813609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.813636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 
00:24:48.793 [2024-05-16 20:23:35.813756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.813784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.813877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.813904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.813989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.814015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.814123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.814148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.814264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.814290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.814371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.814402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.814489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.814516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.814625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.814651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.814729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.814755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.814864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.814890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 
00:24:48.793 [2024-05-16 20:23:35.814976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.815002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.815078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.815105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.815217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.815241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.815331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.815356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.815468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.815494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.815610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.815636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.815726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.815752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.815870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.815898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.815986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.816013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.816113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.816140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 
00:24:48.793 [2024-05-16 20:23:35.816254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.793 [2024-05-16 20:23:35.816279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.793 qpair failed and we were unable to recover it. 00:24:48.793 [2024-05-16 20:23:35.816393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.816419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.816510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.816536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.816642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.816667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.816784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.816809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.816930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.816956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.817032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.817057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.817142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.817169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.817285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.817311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.817397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.817423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 
00:24:48.794 [2024-05-16 20:23:35.817568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.817594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.817677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.817703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.817816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.817847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.817952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.817979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.818088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.818114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.818251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.818276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.818355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.818380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.818473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.818499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.818608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.818633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.818743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.818768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 
00:24:48.794 [2024-05-16 20:23:35.818859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.818885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.818995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.819020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.819154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.819179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.819296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.819321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.819434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.819459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.819565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.819590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.819690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.819716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.819830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.819861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.794 [2024-05-16 20:23:35.819943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.794 [2024-05-16 20:23:35.819968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.794 qpair failed and we were unable to recover it. 00:24:48.795 [2024-05-16 20:23:35.820052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.820078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 
00:24:48.795 [2024-05-16 20:23:35.820193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.820218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 00:24:48.795 [2024-05-16 20:23:35.820299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.820324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 00:24:48.795 [2024-05-16 20:23:35.820404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.820429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 00:24:48.795 [2024-05-16 20:23:35.820545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.820570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 00:24:48.795 [2024-05-16 20:23:35.820653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.820682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 00:24:48.795 [2024-05-16 20:23:35.820763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.820789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 00:24:48.795 [2024-05-16 20:23:35.820875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.820902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 00:24:48.795 [2024-05-16 20:23:35.821023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.821049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 00:24:48.795 [2024-05-16 20:23:35.821186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.821211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 00:24:48.795 [2024-05-16 20:23:35.821293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.821322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 
00:24:48.795 [2024-05-16 20:23:35.821402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.821427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 00:24:48.795 [2024-05-16 20:23:35.821544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.821570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 00:24:48.795 [2024-05-16 20:23:35.821658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.821684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 00:24:48.795 [2024-05-16 20:23:35.821791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.821816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 00:24:48.795 [2024-05-16 20:23:35.821913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.821941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 00:24:48.795 [2024-05-16 20:23:35.822051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.822077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 00:24:48.795 [2024-05-16 20:23:35.822216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.822241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 00:24:48.795 [2024-05-16 20:23:35.822364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.822390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 00:24:48.795 [2024-05-16 20:23:35.822499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.822525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 00:24:48.795 [2024-05-16 20:23:35.822671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.795 [2024-05-16 20:23:35.822695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:48.795 qpair failed and we were unable to recover it. 
00:24:48.795 [2024-05-16 20:23:35.822811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.795 [2024-05-16 20:23:35.822836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:48.795 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1037 "connect() failed, errno = 111" -> nvme_tcp.c:2374 "sock connection error of tqpair=..." -> "qpair failed and we were unable to recover it.") repeats continuously from 20:23:35.822811 through 20:23:35.851278 for tqpair handles 0x7ff270000b90, 0x7ff268000b90, 0x7ff278000b90 and 0x1789f90, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:24:49.097 [2024-05-16 20:23:35.851252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.097 [2024-05-16 20:23:35.851278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:49.097 qpair failed and we were unable to recover it.
00:24:49.097 [2024-05-16 20:23:35.851363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.097 [2024-05-16 20:23:35.851387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.097 qpair failed and we were unable to recover it. 00:24:49.097 [2024-05-16 20:23:35.851474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.097 [2024-05-16 20:23:35.851499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.097 qpair failed and we were unable to recover it. 00:24:49.097 [2024-05-16 20:23:35.851584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.097 [2024-05-16 20:23:35.851610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.097 qpair failed and we were unable to recover it. 00:24:49.097 [2024-05-16 20:23:35.851730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.097 [2024-05-16 20:23:35.851756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.097 qpair failed and we were unable to recover it. 00:24:49.097 [2024-05-16 20:23:35.851865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.097 [2024-05-16 20:23:35.851891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.097 qpair failed and we were unable to recover it. 00:24:49.097 [2024-05-16 20:23:35.851976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.097 [2024-05-16 20:23:35.852002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.097 qpair failed and we were unable to recover it. 00:24:49.097 [2024-05-16 20:23:35.852082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.097 [2024-05-16 20:23:35.852108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.097 qpair failed and we were unable to recover it. 00:24:49.097 [2024-05-16 20:23:35.852193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.097 [2024-05-16 20:23:35.852223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.097 qpair failed and we were unable to recover it. 00:24:49.097 [2024-05-16 20:23:35.852312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.097 [2024-05-16 20:23:35.852339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.097 qpair failed and we were unable to recover it. 00:24:49.097 [2024-05-16 20:23:35.852418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.097 [2024-05-16 20:23:35.852443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.097 qpair failed and we were unable to recover it. 
00:24:49.097 [2024-05-16 20:23:35.852553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.097 [2024-05-16 20:23:35.852578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.097 qpair failed and we were unable to recover it. 00:24:49.097 [2024-05-16 20:23:35.852664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.097 [2024-05-16 20:23:35.852689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.852779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.852807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.852899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.852925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.853009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.853034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.853140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.853165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.853248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.853273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.853357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.853383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.853468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.853493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.853579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.853604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 
00:24:49.098 [2024-05-16 20:23:35.853685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.853711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.853805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.853831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.853936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.853977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.854092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.854127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.854227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.854255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.854364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.854389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.854473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.854499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.854584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.854608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.854692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.854718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.854822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.854847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 
00:24:49.098 [2024-05-16 20:23:35.854950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.854976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.855070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.855095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.855210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.855236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.855343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.855368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.855462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.855494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.855602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.855627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.855716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.855741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.855827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.855862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.098 qpair failed and we were unable to recover it. 00:24:49.098 [2024-05-16 20:23:35.855957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.098 [2024-05-16 20:23:35.855982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.856097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.856122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 
00:24:49.099 [2024-05-16 20:23:35.856215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.856241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.856335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.856361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.856442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.856468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.856555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.856580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.856684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.856709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.856796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.856821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.856906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.856932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.857043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.857068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.857186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.857211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.857290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.857315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 
00:24:49.099 [2024-05-16 20:23:35.857424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.857449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.857535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.857560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.857670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.857695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.857773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.857798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.857883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.857909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.857982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.858007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.858096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.858121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.858199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.858224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.858314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.858341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.858435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.858460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 
00:24:49.099 [2024-05-16 20:23:35.858573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.858598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.858719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.858751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.858839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.858870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.858985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.859011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.859099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.859125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.099 [2024-05-16 20:23:35.859212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.099 [2024-05-16 20:23:35.859237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.099 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.859348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.859373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.859457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.859482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.859566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.859591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.859678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.859705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 
00:24:49.100 [2024-05-16 20:23:35.859795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.859821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.859968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.859994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.860077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.860101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.860191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.860216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.860327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.860352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.860464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.860490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.860574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.860599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.860687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.860712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.860825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.860849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.860967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.860993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 
00:24:49.100 [2024-05-16 20:23:35.861081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.861106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.861198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.861223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.861337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.861362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.861453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.861478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.861565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.861591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.861678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.861703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.861784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.861809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.861904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.861930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.862033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.862071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.862160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.862186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 
00:24:49.100 [2024-05-16 20:23:35.862275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.862301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.862414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.862439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.862526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.100 [2024-05-16 20:23:35.862551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.100 qpair failed and we were unable to recover it. 00:24:49.100 [2024-05-16 20:23:35.862636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.862661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 00:24:49.101 [2024-05-16 20:23:35.862758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.862788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 00:24:49.101 [2024-05-16 20:23:35.862914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.862942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 00:24:49.101 [2024-05-16 20:23:35.863026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.863051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 00:24:49.101 [2024-05-16 20:23:35.863143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.863171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 00:24:49.101 [2024-05-16 20:23:35.863262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.863290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 00:24:49.101 [2024-05-16 20:23:35.863378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.863403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 
00:24:49.101 [2024-05-16 20:23:35.863490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.863518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 00:24:49.101 [2024-05-16 20:23:35.863606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.863633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 00:24:49.101 [2024-05-16 20:23:35.863733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.863758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 00:24:49.101 [2024-05-16 20:23:35.863844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.863882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 00:24:49.101 [2024-05-16 20:23:35.863970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.863995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 00:24:49.101 [2024-05-16 20:23:35.864084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.864110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 00:24:49.101 [2024-05-16 20:23:35.864220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.864245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 00:24:49.101 [2024-05-16 20:23:35.864325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.864350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 00:24:49.101 [2024-05-16 20:23:35.864442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.864467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 00:24:49.101 [2024-05-16 20:23:35.864550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.864577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 
00:24:49.101 [2024-05-16 20:23:35.864664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.864689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 00:24:49.101 [2024-05-16 20:23:35.864804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.864829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 00:24:49.101 [2024-05-16 20:23:35.864922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.864948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 00:24:49.101 [2024-05-16 20:23:35.865036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.101 [2024-05-16 20:23:35.865061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.101 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.865153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.865178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.865274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.865299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.865377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.865402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.865506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.865531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.865619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.865644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.865732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.865757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 
00:24:49.102 [2024-05-16 20:23:35.865841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.865882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.865970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.865996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.866116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.866143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.866228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.866258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.866376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.866401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.866488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.866513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.866597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.866624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.866750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.866788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.866888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.866921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.867038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.867064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 
00:24:49.102 [2024-05-16 20:23:35.867152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.867177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.867264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.867291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.867401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.867426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.867540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.867565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.867654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.867679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.867769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.867793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.867873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.867900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.867986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.868012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.868093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.868119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.102 [2024-05-16 20:23:35.868227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.868252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 
00:24:49.102 [2024-05-16 20:23:35.868361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.102 [2024-05-16 20:23:35.868386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.102 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.868496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.868523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.868616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.868644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.868760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.868785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.868904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.868931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.869024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.869049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.869131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.869156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.869238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.869263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.869366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.869392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.869466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.869491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 
00:24:49.103 [2024-05-16 20:23:35.869569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.869594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.869702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.869727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.869803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.869828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.869939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.869965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.870079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.870103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.870176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.870207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.870297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.870322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.870416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.870441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.870521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.870546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.870619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.870644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 
00:24:49.103 [2024-05-16 20:23:35.870759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.870787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.870897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.870923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.871007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.871034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.871118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.871145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.871286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.871312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.871402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.871427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.871522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.871549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.871659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.103 [2024-05-16 20:23:35.871685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.103 qpair failed and we were unable to recover it. 00:24:49.103 [2024-05-16 20:23:35.871783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.871818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.871927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.871955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 
00:24:49.104 [2024-05-16 20:23:35.872054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.872079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.872166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.872191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.872330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.872355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.872448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.872472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.872561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.872586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.872702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.872731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.872814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.872840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.872945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.872973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.873071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.873097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.873234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.873260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 
00:24:49.104 [2024-05-16 20:23:35.873344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.873369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.873453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.873479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.873597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.873627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.873712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.873737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.873828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.873860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.873945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.873970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.874056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.874081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.874188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.874213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.874293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.874318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.874443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.874467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 
00:24:49.104 [2024-05-16 20:23:35.874552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.874577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.874662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.874687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.874782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.874821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.874914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.874943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.875035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.104 [2024-05-16 20:23:35.875061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.104 qpair failed and we were unable to recover it. 00:24:49.104 [2024-05-16 20:23:35.875186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.875212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.875336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.875364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.875476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.875501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.875612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.875639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.875749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.875787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 
00:24:49.105 [2024-05-16 20:23:35.875904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.875932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.876012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.876038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.876124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.876150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.876239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.876266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.876380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.876407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.876497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.876523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.876611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.876636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.876742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.876767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.876857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.876883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.876997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.877023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 
00:24:49.105 [2024-05-16 20:23:35.877108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.877133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.877250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.877275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.877410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.877435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.877513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.877538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.877624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.877649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.877763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.877788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.877880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.877905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.877991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.878016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.878102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.878127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.878239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.878264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 
00:24:49.105 [2024-05-16 20:23:35.878352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.878378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.878462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.878487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.105 [2024-05-16 20:23:35.878574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.105 [2024-05-16 20:23:35.878599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.105 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.878689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.878714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.878830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.878862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.878980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.879005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.879094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.879119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.879204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.879229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.879361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.879400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.879517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.879545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 
00:24:49.106 [2024-05-16 20:23:35.879657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.879682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.879770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.879795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.879893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.879919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.880030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.880056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.880143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.880169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.880257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.880286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.880382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.880408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.880491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.880517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.880605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.880632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.880727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.880755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 
00:24:49.106 [2024-05-16 20:23:35.880836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.880870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.880986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.881015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.881130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.881156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.881275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.881301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.881414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.881439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.881525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.881550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.881631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.881658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.881753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.881792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.881894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.881921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 00:24:49.106 [2024-05-16 20:23:35.882004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.882030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.106 qpair failed and we were unable to recover it. 
00:24:49.106 [2024-05-16 20:23:35.882155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.106 [2024-05-16 20:23:35.882181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 00:24:49.107 [2024-05-16 20:23:35.882319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.882345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 00:24:49.107 [2024-05-16 20:23:35.882431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.882457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 00:24:49.107 [2024-05-16 20:23:35.882538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.882563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 00:24:49.107 [2024-05-16 20:23:35.882644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.882669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 00:24:49.107 [2024-05-16 20:23:35.882785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.882810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 00:24:49.107 [2024-05-16 20:23:35.882901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.882928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 00:24:49.107 [2024-05-16 20:23:35.883020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.883045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 00:24:49.107 [2024-05-16 20:23:35.883132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.883157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 00:24:49.107 [2024-05-16 20:23:35.883274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.883299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 
00:24:49.107 [2024-05-16 20:23:35.883376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.883401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 00:24:49.107 [2024-05-16 20:23:35.883512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.883544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 00:24:49.107 [2024-05-16 20:23:35.883632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.883660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 00:24:49.107 [2024-05-16 20:23:35.883763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.883806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 00:24:49.107 [2024-05-16 20:23:35.883974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.884002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 00:24:49.107 [2024-05-16 20:23:35.884092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.884117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 00:24:49.107 [2024-05-16 20:23:35.884201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.884226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 00:24:49.107 [2024-05-16 20:23:35.884318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.884343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 00:24:49.107 [2024-05-16 20:23:35.884454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.884479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 00:24:49.107 [2024-05-16 20:23:35.884590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.884615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 
00:24:49.107 [2024-05-16 20:23:35.884699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.884724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.107 qpair failed and we were unable to recover it. 00:24:49.107 [2024-05-16 20:23:35.884810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.107 [2024-05-16 20:23:35.884839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.884944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.884971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.885062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.885088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.885225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.885251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.885339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.885366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.885478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.885505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.885592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.885619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.885771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.885810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.885931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.885959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 
00:24:49.108 [2024-05-16 20:23:35.886068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.886094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.886174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.886200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.886306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.886332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.886447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.886472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.886549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.886575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.886718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.886743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.886823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.886849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.886993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.887019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.887098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.887124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.887236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.887261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 
00:24:49.108 [2024-05-16 20:23:35.887341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.887367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.887440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.887465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.887543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.887568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.887654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.887693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.887785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.887812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.887901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.887931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.888026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.888052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.888141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.888167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.888254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.108 [2024-05-16 20:23:35.888279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.108 qpair failed and we were unable to recover it. 00:24:49.108 [2024-05-16 20:23:35.888361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.109 [2024-05-16 20:23:35.888386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.109 qpair failed and we were unable to recover it. 
00:24:49.109 [2024-05-16 20:23:35.888508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.109 [2024-05-16 20:23:35.888534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.109 qpair failed and we were unable to recover it. 00:24:49.109 [2024-05-16 20:23:35.888647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.109 [2024-05-16 20:23:35.888674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.109 qpair failed and we were unable to recover it. 00:24:49.109 [2024-05-16 20:23:35.888763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.109 [2024-05-16 20:23:35.888789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.109 qpair failed and we were unable to recover it. 00:24:49.109 [2024-05-16 20:23:35.888897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.109 [2024-05-16 20:23:35.888928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.109 qpair failed and we were unable to recover it. 00:24:49.109 [2024-05-16 20:23:35.889014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.109 [2024-05-16 20:23:35.889041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.109 qpair failed and we were unable to recover it. 00:24:49.109 [2024-05-16 20:23:35.889133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.109 [2024-05-16 20:23:35.889159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.109 qpair failed and we were unable to recover it. 00:24:49.109 [2024-05-16 20:23:35.889244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.109 [2024-05-16 20:23:35.889270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.109 qpair failed and we were unable to recover it. 00:24:49.109 [2024-05-16 20:23:35.889354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.109 [2024-05-16 20:23:35.889379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.109 qpair failed and we were unable to recover it. 00:24:49.109 [2024-05-16 20:23:35.889497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.109 [2024-05-16 20:23:35.889525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.109 qpair failed and we were unable to recover it. 00:24:49.109 [2024-05-16 20:23:35.889613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.109 [2024-05-16 20:23:35.889638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.109 qpair failed and we were unable to recover it. 
00:24:49.110 [2024-05-16 20:23:35.891895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.110 [2024-05-16 20:23:35.891934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420
00:24:49.110 qpair failed and we were unable to recover it.
00:24:49.116 [2024-05-16 20:23:35.914792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.116 [2024-05-16 20:23:35.914818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.116 qpair failed and we were unable to recover it. 00:24:49.116 [2024-05-16 20:23:35.914940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.116 [2024-05-16 20:23:35.914967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.116 qpair failed and we were unable to recover it. 00:24:49.116 [2024-05-16 20:23:35.915055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.116 [2024-05-16 20:23:35.915080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.116 qpair failed and we were unable to recover it. 00:24:49.116 [2024-05-16 20:23:35.915181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.116 [2024-05-16 20:23:35.915208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.116 qpair failed and we were unable to recover it. 00:24:49.116 [2024-05-16 20:23:35.915319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.116 [2024-05-16 20:23:35.915345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.116 qpair failed and we were unable to recover it. 00:24:49.116 [2024-05-16 20:23:35.915444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.116 [2024-05-16 20:23:35.915474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.116 qpair failed and we were unable to recover it. 00:24:49.116 [2024-05-16 20:23:35.915567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.116 [2024-05-16 20:23:35.915594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.116 qpair failed and we were unable to recover it. 00:24:49.116 [2024-05-16 20:23:35.915672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.116 [2024-05-16 20:23:35.915698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.116 qpair failed and we were unable to recover it. 00:24:49.116 [2024-05-16 20:23:35.915809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.116 [2024-05-16 20:23:35.915839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.116 qpair failed and we were unable to recover it. 00:24:49.116 [2024-05-16 20:23:35.915945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.116 [2024-05-16 20:23:35.915971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.116 qpair failed and we were unable to recover it. 
00:24:49.116 [2024-05-16 20:23:35.916050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.116 [2024-05-16 20:23:35.916075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.116 qpair failed and we were unable to recover it. 00:24:49.116 [2024-05-16 20:23:35.916191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.116 [2024-05-16 20:23:35.916216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.116 qpair failed and we were unable to recover it. 00:24:49.116 [2024-05-16 20:23:35.916307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.116 [2024-05-16 20:23:35.916338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.116 qpair failed and we were unable to recover it. 00:24:49.116 [2024-05-16 20:23:35.916450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.116 [2024-05-16 20:23:35.916475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.116 qpair failed and we were unable to recover it. 00:24:49.116 [2024-05-16 20:23:35.916562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.116 [2024-05-16 20:23:35.916589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.116 qpair failed and we were unable to recover it. 00:24:49.116 [2024-05-16 20:23:35.916675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.116 [2024-05-16 20:23:35.916700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.116 qpair failed and we were unable to recover it. 00:24:49.116 [2024-05-16 20:23:35.916777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.116 [2024-05-16 20:23:35.916802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.116 qpair failed and we were unable to recover it. 00:24:49.116 [2024-05-16 20:23:35.916901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.116 [2024-05-16 20:23:35.916926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.116 qpair failed and we were unable to recover it. 00:24:49.116 [2024-05-16 20:23:35.917003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.116 [2024-05-16 20:23:35.917029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.116 qpair failed and we were unable to recover it. 00:24:49.116 [2024-05-16 20:23:35.917110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.917136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 
00:24:49.117 [2024-05-16 20:23:35.917215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.917240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.917347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.917373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.917485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.917515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.917608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.917637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.917753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.917779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.917867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.917893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.917981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.918007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.918098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.918124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.918213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.918241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.918331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.918356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 
00:24:49.117 [2024-05-16 20:23:35.918440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.918464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.918579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.918604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.918700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.918738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.918835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.918872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.918980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.919005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.919085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.919113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.919227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.919253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.919333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.919359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.919468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.919494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.919581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.919606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 
00:24:49.117 [2024-05-16 20:23:35.919727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.919752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.919867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.919894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.919976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.920002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.920093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.920120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.920209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.920234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.920340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.920365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.920453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.920478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.920560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.920585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.920665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.920695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.920809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.920834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 
00:24:49.117 [2024-05-16 20:23:35.920934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.117 [2024-05-16 20:23:35.920965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.117 qpair failed and we were unable to recover it. 00:24:49.117 [2024-05-16 20:23:35.921078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.921103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.921208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.921234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.921318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.921342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.921437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.921463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.921551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.921577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.921679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.921703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.921790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.921815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.921913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.921941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.922037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.922062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 
00:24:49.118 [2024-05-16 20:23:35.922162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.922188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.922275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.922301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.922449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.922474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.922563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.922589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.922698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.922725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.922839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.922874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.922964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.922989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.923098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.923124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.923211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.923237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.923378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.923403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 
00:24:49.118 [2024-05-16 20:23:35.923486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.923511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.923596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.923620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.923705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.923731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.923819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.923846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.923948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.923975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.924086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.924111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.924191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.924217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.924298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.924324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.924439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.924464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.924549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.924574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 
00:24:49.118 [2024-05-16 20:23:35.924677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.924702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.924813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.924838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.924931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.924957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.118 qpair failed and we were unable to recover it. 00:24:49.118 [2024-05-16 20:23:35.925066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.118 [2024-05-16 20:23:35.925091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.925177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.925202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.925313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.925338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.925424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.925450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.925538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.925564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.925640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.925670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.925777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.925802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 
00:24:49.119 [2024-05-16 20:23:35.925913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.925938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.926031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.926055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.926159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.926184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.926269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.926293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.926420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.926449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.926541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.926567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.926653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.926681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.926772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.926797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.926888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.926915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.927000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.927025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 
00:24:49.119 [2024-05-16 20:23:35.927109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.927136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.927220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.927246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.927337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.927363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.927475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.927499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.927579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.927606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.927691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.927717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.927798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.927822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.927931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.927970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.928068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.119 [2024-05-16 20:23:35.928095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.119 qpair failed and we were unable to recover it. 00:24:49.119 [2024-05-16 20:23:35.928179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.928205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 
00:24:49.120 [2024-05-16 20:23:35.928282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.928307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.928411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.928436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.928519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.928544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.928621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.928646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.928734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.928759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.928866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.928898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.928982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.929009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.929095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.929120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.929204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.929230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.929311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.929336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 
00:24:49.120 [2024-05-16 20:23:35.929412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.929437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.929527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.929552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.929664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.929689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.929803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.929828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.929942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.929968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.930057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.930082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.930162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.930187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.930281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.930307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.930401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.930429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.930547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.930572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 
00:24:49.120 [2024-05-16 20:23:35.930670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.930709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.930810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.930838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.930932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.930959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.931050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.931076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.931172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.931199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.931336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.931361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.931446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.931473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.931565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.931591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.931686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.931711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.931796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.931824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 
00:24:49.120 [2024-05-16 20:23:35.931932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.931960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.932052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.932077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.120 [2024-05-16 20:23:35.932192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.120 [2024-05-16 20:23:35.932224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.120 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.932315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.932341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.932430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.932455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.932563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.932589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.932675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.932702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.932786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.932812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.932907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.932935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.933027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.933053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 
00:24:49.121 [2024-05-16 20:23:35.933164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.933190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.933300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.933325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.933414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.933440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.933579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.933605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.933692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.933717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.933800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.933825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.933923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.933948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.934037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.934062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.934207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.934232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.934346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.934371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 
00:24:49.121 [2024-05-16 20:23:35.934485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.934512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.934631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.934657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.934744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.934770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.934879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.934904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.934988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.935014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.935095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.935120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.935236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.935261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.935346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.935371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.935483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.935508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.935625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.935651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 
00:24:49.121 [2024-05-16 20:23:35.935731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.935757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.935871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.935897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.936007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.936032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.936111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.936137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.936221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.936246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.936354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.936379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.936486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.121 [2024-05-16 20:23:35.936512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.121 qpair failed and we were unable to recover it. 00:24:49.121 [2024-05-16 20:23:35.936590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.936615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.936690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.936715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.936834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.936886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 
00:24:49.122 [2024-05-16 20:23:35.936983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.937010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.937100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.937127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.937212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.937237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.937332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.937358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.937493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.937518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.937606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.937633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.937725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.937750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.937864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.937893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.937986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.938014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.938104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.938130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 
00:24:49.122 [2024-05-16 20:23:35.938272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.938297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.938411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.938437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.938578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.938603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.938713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.938739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.938866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.938905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.939026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.939053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.939176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.939204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.939295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.939321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.939450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.939475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.939580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.939605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 
00:24:49.122 [2024-05-16 20:23:35.939689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.939716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.939797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.939822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.939967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.939993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.940068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.940093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.940203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.940228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.940307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.940332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.940419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.940446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.940569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.940594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.940711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.940736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.940825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.940850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 
00:24:49.122 [2024-05-16 20:23:35.940952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.940977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.941064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.941088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.941191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.122 [2024-05-16 20:23:35.941216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.122 qpair failed and we were unable to recover it. 00:24:49.122 [2024-05-16 20:23:35.941296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.941321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.941405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.941431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.941517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.941544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.941624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.941650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.941771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.941796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.941899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.941925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.942009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.942035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 
00:24:49.123 [2024-05-16 20:23:35.942140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.942166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.942255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.942280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.942391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.942416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.942516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.942555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.942652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.942680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.942771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.942796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.942918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.942945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.943057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.943083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.943190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.943215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.943325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.943351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 
00:24:49.123 [2024-05-16 20:23:35.943442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.943470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.943556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.943583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.943694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.943720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.943811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.943842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.943986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.944012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.944123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.944148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.944262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.944292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.944400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.944425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.944513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.944542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.944639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.944678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 
00:24:49.123 [2024-05-16 20:23:35.944798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.944825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.944949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.944975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.945087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.945112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.945219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.945244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.945327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.945357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.945475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.945502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.945582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.945608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.945719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.945753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.945880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.945909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.946000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.946026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 
00:24:49.123 [2024-05-16 20:23:35.946131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.946158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.946274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.946299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.946418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.946447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.946537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.946569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.946702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.946742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.946879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.946917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.947038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.947066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.947158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.947183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.947272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.947297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.947407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.947432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 
00:24:49.123 [2024-05-16 20:23:35.947512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.947537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.947648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.123 [2024-05-16 20:23:35.947674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.123 qpair failed and we were unable to recover it. 00:24:49.123 [2024-05-16 20:23:35.947773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.947799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.947892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.947933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.948069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.948105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.948236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.948272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.948373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.948401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.948507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.948534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.948647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.948672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.948753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.948778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 
00:24:49.124 [2024-05-16 20:23:35.948893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.948920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.949029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.949054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.949136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.949161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.949248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.949274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.949355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.949381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.949469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.949494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.949588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.949614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.949696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.949722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.949812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.949839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.949937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.949963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 
00:24:49.124 [2024-05-16 20:23:35.950070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.950095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.950177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.950202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.950284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.950310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.950424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.950448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.950562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.950587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.950675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.950700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.950794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.950819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.950911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.950937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.951049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.951075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.951172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.951211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 
00:24:49.124 [2024-05-16 20:23:35.951338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.951366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.951472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.951497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.951583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.951608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.951691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.951716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.951823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.951848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.951969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.951995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.952079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.952105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.952189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.952216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.952297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.952323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.952440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.952465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 
00:24:49.124 [2024-05-16 20:23:35.952554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.952579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.952668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.952696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.952813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.952838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.952934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.952965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.953052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.953077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.953157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.953184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.953291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.953316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.124 qpair failed and we were unable to recover it. 00:24:49.124 [2024-05-16 20:23:35.953405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.124 [2024-05-16 20:23:35.953431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.953516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.953542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.953650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.953676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 
00:24:49.125 [2024-05-16 20:23:35.953760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.953786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.953874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.953901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.954016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.954041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.954123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.954148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.954236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.954261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.954342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.954368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.954451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.954477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.954564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.954589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.954668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.954693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.954778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.954804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 
00:24:49.125 [2024-05-16 20:23:35.954938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.954964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.955046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.955071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.955155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.955181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.955263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.955289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.955403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.955428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.955537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.955562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.955651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.955677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.955766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.955792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.955909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.955937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.956050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.956075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 
00:24:49.125 [2024-05-16 20:23:35.956168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.956193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.956306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.956331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.956435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.956461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.956544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.956569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.956651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.956677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.956762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.956788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.956870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.956896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.957003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.957031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.957113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.957138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.957275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.957300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 
00:24:49.125 [2024-05-16 20:23:35.957387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.957413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.957518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.957543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.957623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.957649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.957755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.957784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.957902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.957935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.958073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.958099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.958206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.958232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.958315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.958340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.958451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.958476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.958582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.958621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 
00:24:49.125 [2024-05-16 20:23:35.958741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.958768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.958881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.958908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.959020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.959045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.959127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.959151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.959266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.959290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.959421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.959447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.959563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.959590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.959678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.959704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.959796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.125 [2024-05-16 20:23:35.959823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.125 qpair failed and we were unable to recover it. 00:24:49.125 [2024-05-16 20:23:35.959917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.959943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 
00:24:49.126 [2024-05-16 20:23:35.960026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.960051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.960128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.960154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.960264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.960289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.960371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.960396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.960476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.960501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.960615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.960639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.960719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.960746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.960864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.960890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.960975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.961000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.961085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.961109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 
00:24:49.126 [2024-05-16 20:23:35.961217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.961247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.961372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.961400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.961492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.961517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.961628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.961653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.961728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.961753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.961867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.961894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.961974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.961999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.962113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.962139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.962221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.962246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.962321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.962349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 
00:24:49.126 [2024-05-16 20:23:35.962504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.962530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.962642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.962667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.962778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.962803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.962888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.962916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.963013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.963039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.963147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.963173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.963287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.963312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.963403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.963430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.963508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.963533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.963646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.963672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 
00:24:49.126 [2024-05-16 20:23:35.963809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.963834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.963935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.963961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.964053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.964078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.964161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.964186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.964344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.964369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.964504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.964529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.964641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.964666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.964783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.964810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.964926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.964954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.965040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.965065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 
00:24:49.126 [2024-05-16 20:23:35.965151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.965176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.965288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.965314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.965397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.965423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.965533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.965558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.965674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.965702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.965790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.965816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.965929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.965956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.966044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.966070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.966181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.966206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.966324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.966349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 
00:24:49.126 [2024-05-16 20:23:35.966439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.966470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.966555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.966581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.966664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.966690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.126 qpair failed and we were unable to recover it. 00:24:49.126 [2024-05-16 20:23:35.966804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.126 [2024-05-16 20:23:35.966829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.966947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.966973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.967060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.967085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.967164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.967189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.967270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.967295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.967385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.967410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.967522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.967547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 
00:24:49.127 [2024-05-16 20:23:35.967632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.967656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.967765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.967791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.967880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.967908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.968023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.968048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.968143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.968168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.968276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.968301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.968381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.968407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.968507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.968545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.968661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.968688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.968781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.968806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 
00:24:49.127 [2024-05-16 20:23:35.968892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.968919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.969009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.969034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.969116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.969142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.969250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.969275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.969390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.969419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.969514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.969539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.969623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.969648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.969750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.969780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.969891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.969917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.969991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.970016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 
00:24:49.127 [2024-05-16 20:23:35.970101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.970125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.970202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.970227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.970312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.970337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.970447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.970472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.970553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.970578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.970683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.970708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.970794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.970821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.970942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.970968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.971050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.971076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.971157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.971181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 
00:24:49.127 [2024-05-16 20:23:35.971263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.971289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.971398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.971424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.971527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.971553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.971665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.971690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.971779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.971804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.971913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.971938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.972067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.972092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.972205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.972230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.972310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.972334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.972451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.972476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 
00:24:49.127 [2024-05-16 20:23:35.972560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.972585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.972669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.972693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.972816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.972841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.127 [2024-05-16 20:23:35.972973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.127 [2024-05-16 20:23:35.973012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.127 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.973154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.973186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.973329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.973354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.973475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.973500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.973614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.973642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.973731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.973756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.973868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.973895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 
00:24:49.128 [2024-05-16 20:23:35.973978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.974005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.974119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.974144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.974230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.974255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.974372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.974398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.974481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.974506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.974598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.974624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.974716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.974741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.974848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.974880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.974997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.975022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.975131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.975156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 
00:24:49.128 [2024-05-16 20:23:35.975266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.975292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.975384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.975411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.975527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.975555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.975671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.975697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.975811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.975837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.975989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.976015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.976122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.976148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.976239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.976265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.976351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.976377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.976512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.976537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 
00:24:49.128 [2024-05-16 20:23:35.976646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.976671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.976822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.976848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.976964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.976990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.977100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.977126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.977237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.977263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.977354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.977380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.977492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.977517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.977627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.977652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.977729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.977754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.977865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.977904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 
00:24:49.128 [2024-05-16 20:23:35.978022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.978048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.978137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.978162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.978271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.978296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.978396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.978435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.978553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.978585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.978700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.978726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.978838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.978871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.978952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.978978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.979067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.979094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.979201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.979227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 
00:24:49.128 [2024-05-16 20:23:35.979340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.979366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.979448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.979474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.979582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.979607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.979718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.979743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.128 [2024-05-16 20:23:35.979849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.128 [2024-05-16 20:23:35.979880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.128 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.980011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.980049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.980144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.980172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.980267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.980294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.980415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.980440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.980556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.980581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 
00:24:49.129 [2024-05-16 20:23:35.980682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.980709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.980793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.980819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.980910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.980937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.981044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.981069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.981185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.981211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.981301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.981328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.981463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.981489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.981575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.981600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.981684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.981709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.981840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.981873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 
00:24:49.129 [2024-05-16 20:23:35.982008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.982033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.982113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.982143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.982258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.982283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.982361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.982387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.982493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.982517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.982630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.982655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.982731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.982756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.982877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.982903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.983018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.983043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.983158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.983182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 
00:24:49.129 [2024-05-16 20:23:35.983261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.983286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.983362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.983387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.983475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.983501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.983591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.983616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.983715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.983754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.983871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.983909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.984005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.984032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.984176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.984202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.984334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.984360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.984473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.984498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 
00:24:49.129 [2024-05-16 20:23:35.984583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.984609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.984734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.984759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.984846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.984878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.984961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.984986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.985067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.985092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.985173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.985198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.985333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.985357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.985471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.985495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.985607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.985636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.985714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.985739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 
00:24:49.129 [2024-05-16 20:23:35.985831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.985873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.985991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.986017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.986155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.986181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.986293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.986318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.986461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.986487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.986575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.986603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.986698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.986725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.986837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.986871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.986980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.987005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.987098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.987124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 
00:24:49.129 [2024-05-16 20:23:35.987207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.987232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.987320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.987347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.129 [2024-05-16 20:23:35.987445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.129 [2024-05-16 20:23:35.987472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.129 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.987586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.987612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.987748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.987774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.987882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.987910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.988004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.988030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.988142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.988168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.988247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.988272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.988388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.988414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 
00:24:49.130 [2024-05-16 20:23:35.988525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.988553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.988649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.988676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.988783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.988808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.988927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.988954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.989037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.989063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.989173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.989199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.989314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.989340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.989445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.989472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.989554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.989579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.989666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.989691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 
00:24:49.130 [2024-05-16 20:23:35.989774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.989799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.989875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.989901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.989990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.990016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.990130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.990157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.990271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.990296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.990381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.990409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.990495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.990521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.990604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.990630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.990716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.990742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.990862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.990889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 
00:24:49.130 [2024-05-16 20:23:35.990978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.991003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.991078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.991104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.991215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.991241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.991379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.991404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.991513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.991541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.991629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.991654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.991764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.991789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.991883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.991909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.992011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.992037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.992118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.992144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 
00:24:49.130 [2024-05-16 20:23:35.992229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.992254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.992358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.992383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.992495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.992520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.992607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.992633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.992723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.992751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.992871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.992898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.993008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.993034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.993123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.993149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.993229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.993255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.130 [2024-05-16 20:23:35.993335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.993361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 
00:24:49.130 [2024-05-16 20:23:35.993463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.130 [2024-05-16 20:23:35.993488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.130 qpair failed and we were unable to recover it. 00:24:49.131 [2024-05-16 20:23:35.993570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.131 [2024-05-16 20:23:35.993594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.131 qpair failed and we were unable to recover it. 00:24:49.131 [2024-05-16 20:23:35.993679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.131 [2024-05-16 20:23:35.993705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.131 qpair failed and we were unable to recover it. 00:24:49.131 [2024-05-16 20:23:35.993798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.131 [2024-05-16 20:23:35.993825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.131 qpair failed and we were unable to recover it. 00:24:49.131 [2024-05-16 20:23:35.993912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.131 [2024-05-16 20:23:35.993938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.131 qpair failed and we were unable to recover it. 00:24:49.131 [2024-05-16 20:23:35.994046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.131 [2024-05-16 20:23:35.994075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.131 qpair failed and we were unable to recover it. 00:24:49.131 [2024-05-16 20:23:35.994181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.131 [2024-05-16 20:23:35.994205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.131 qpair failed and we were unable to recover it. 00:24:49.131 [2024-05-16 20:23:35.994314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.131 [2024-05-16 20:23:35.994340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.131 qpair failed and we were unable to recover it. 00:24:49.131 [2024-05-16 20:23:35.994427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.131 [2024-05-16 20:23:35.994454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.131 qpair failed and we were unable to recover it. 00:24:49.131 [2024-05-16 20:23:35.994536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.131 [2024-05-16 20:23:35.994562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.131 qpair failed and we were unable to recover it. 
00:24:49.131 [2024-05-16 20:23:35.994670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.131 [2024-05-16 20:23:35.994695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.131 qpair failed and we were unable to recover it. 00:24:49.131 [2024-05-16 20:23:35.994806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.131 [2024-05-16 20:23:35.994831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.131 qpair failed and we were unable to recover it. 00:24:49.131 [2024-05-16 20:23:35.994954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.131 [2024-05-16 20:23:35.994982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.131 qpair failed and we were unable to recover it. 00:24:49.131 [2024-05-16 20:23:35.995123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.131 [2024-05-16 20:23:35.995149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.131 qpair failed and we were unable to recover it. 00:24:49.131 [2024-05-16 20:23:35.995236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.131 [2024-05-16 20:23:35.995262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.131 qpair failed and we were unable to recover it. 00:24:49.131 [2024-05-16 20:23:35.995400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.131 [2024-05-16 20:23:35.995426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.131 qpair failed and we were unable to recover it. 00:24:49.131 [2024-05-16 20:23:35.995562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.131 [2024-05-16 20:23:35.995587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.131 qpair failed and we were unable to recover it. 00:24:49.131 [2024-05-16 20:23:35.995702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.131 [2024-05-16 20:23:35.995727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.131 qpair failed and we were unable to recover it. 00:24:49.131 [2024-05-16 20:23:35.995857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.131 [2024-05-16 20:23:35.995884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.131 qpair failed and we were unable to recover it. 00:24:49.131 [2024-05-16 20:23:35.996033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.131 [2024-05-16 20:23:35.996059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.131 qpair failed and we were unable to recover it. 
00:24:49.131 [2024-05-16 20:23:35.996171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.131 [2024-05-16 20:23:35.996196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420
00:24:49.131 qpair failed and we were unable to recover it.
00:24:49.131 [2024-05-16 20:23:35.996683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.131 [2024-05-16 20:23:35.996710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:49.131 qpair failed and we were unable to recover it.
00:24:49.131 [2024-05-16 20:23:35.996967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.131 [2024-05-16 20:23:35.997006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420
00:24:49.131 qpair failed and we were unable to recover it.
00:24:49.131 - 00:24:49.135 [2024-05-16 20:23:35.997099 - 20:23:36.023027] the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats continuously for tqpair=0x7ff278000b90, 0x7ff270000b90 and 0x1789f90, always with addr=10.0.0.2, port=4420 and errno = 111; every attempt ends with "qpair failed and we were unable to recover it."
00:24:49.135 [2024-05-16 20:23:36.023108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.023134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.023220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.023246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.023385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.023410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.023547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.023574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.023673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.023698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.023783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.023808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.023889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.023915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.024053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.024078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.024189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.024214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.024331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.024357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 
00:24:49.135 [2024-05-16 20:23:36.024473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.024501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.024592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.024618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.024711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.024738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.024826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.024858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.024973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.024999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.025111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.025136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.025228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.025253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.025366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.025409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.025517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.025544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.025663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.025689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 
00:24:49.135 [2024-05-16 20:23:36.025780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.025806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.025921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.025947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.026059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.026085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.026195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.026219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.026303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.026328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.026409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.026434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.026541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.026566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.026658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.026683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.026821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.026848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.026972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.026999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 
00:24:49.135 [2024-05-16 20:23:36.027134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.027159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.027273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.027298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.027378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.027404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.027508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.027534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.027621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.027648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.027765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.027793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.027911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.027937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.028049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.028074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.028216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.028241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.028351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.028376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 
00:24:49.135 [2024-05-16 20:23:36.028481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.028507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.028594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.028621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.028704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.028729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.028843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.028875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.028968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.028994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.029106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.029132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.029235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.029261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.029338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.029365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.029452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.029478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.029558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.029583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 
00:24:49.135 [2024-05-16 20:23:36.029659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.029685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.029791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.029816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.029931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.029957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.030042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.030067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.135 [2024-05-16 20:23:36.030169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.135 [2024-05-16 20:23:36.030195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.135 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.030334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.030359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.030448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.030473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.030590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.030619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.030729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.030768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.030865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.030893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 
00:24:49.136 [2024-05-16 20:23:36.031007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.031032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.031126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.031152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.031241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.031266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.031354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.031380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.031522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.031549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.031660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.031686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.031771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.031796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.031908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.031934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.032019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.032044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.032130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.032155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 
00:24:49.136 [2024-05-16 20:23:36.032240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.032267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.032384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.032409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.032517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.032543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.032625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.032649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.032752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.032790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.032915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.032943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.033036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.033062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.033174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.033200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.033338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.033363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.033476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.033501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 
00:24:49.136 [2024-05-16 20:23:36.033586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.033612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.033695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.033721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.033830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.033861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.033982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.034008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.034091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.034116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.034200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.034226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.034319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.034344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.034455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.034481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.034576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.034601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.034712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.034737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 
00:24:49.136 [2024-05-16 20:23:36.034847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.034884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.034999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.035024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.035166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.035191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.035303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.035328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.035448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.035473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.035581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.035607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.035715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.035741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.035814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.035844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.035941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.035968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.036080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.036105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 
00:24:49.136 [2024-05-16 20:23:36.036194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.036220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.036336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.036361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.036472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.036497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.036621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.036660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.036757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.036784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.036890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.036916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.037000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.037025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.037140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.037167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.037243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.037268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 00:24:49.136 [2024-05-16 20:23:36.037378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.136 [2024-05-16 20:23:36.037405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.136 qpair failed and we were unable to recover it. 
00:24:49.137 [2024-05-16 20:23:36.037519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.037545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.037687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.037713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.037800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.037825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.037946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.037973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.038060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.038086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.038205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.038231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.038369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.038394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.038502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.038527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.038607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.038632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.038746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.038785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 
00:24:49.137 [2024-05-16 20:23:36.038876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.038905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.039055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.039081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.039163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.039189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.039276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.039303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.039383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.039414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.039504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.039530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.039623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.039648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.039731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.039756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.039840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.039872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.039989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.040014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 
00:24:49.137 [2024-05-16 20:23:36.040153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.040178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.040271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.040296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.040382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.040407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.040494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.040521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.040608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.040634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.040715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.040742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.040857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.040884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.040967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.040994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.041089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.041116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 00:24:49.137 [2024-05-16 20:23:36.041256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.137 [2024-05-16 20:23:36.041283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.137 qpair failed and we were unable to recover it. 
00:24:49.138 [... the same three-line error record repeats for the remaining connection attempts through 20:23:36.066: posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 / 0x7ff270000b90 / 0x1789f90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:24:49.142 [2024-05-16 20:23:36.066954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.142 [2024-05-16 20:23:36.066979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.142 qpair failed and we were unable to recover it. 00:24:49.142 [2024-05-16 20:23:36.067090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.142 [2024-05-16 20:23:36.067115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.142 qpair failed and we were unable to recover it. 00:24:49.142 [2024-05-16 20:23:36.067226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.142 [2024-05-16 20:23:36.067251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.142 qpair failed and we were unable to recover it. 00:24:49.142 [2024-05-16 20:23:36.067380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.142 [2024-05-16 20:23:36.067405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.142 qpair failed and we were unable to recover it. 00:24:49.142 [2024-05-16 20:23:36.067493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.142 [2024-05-16 20:23:36.067519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.142 qpair failed and we were unable to recover it. 00:24:49.142 [2024-05-16 20:23:36.067599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.142 [2024-05-16 20:23:36.067624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.142 qpair failed and we were unable to recover it. 00:24:49.142 [2024-05-16 20:23:36.067757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.142 [2024-05-16 20:23:36.067783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.142 qpair failed and we were unable to recover it. 00:24:49.142 [2024-05-16 20:23:36.067870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.142 [2024-05-16 20:23:36.067897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.142 qpair failed and we were unable to recover it. 00:24:49.142 [2024-05-16 20:23:36.067981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.142 [2024-05-16 20:23:36.068005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.142 qpair failed and we were unable to recover it. 00:24:49.142 [2024-05-16 20:23:36.068093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.142 [2024-05-16 20:23:36.068118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.142 qpair failed and we were unable to recover it. 
00:24:49.142 [2024-05-16 20:23:36.068230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.142 [2024-05-16 20:23:36.068255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.142 qpair failed and we were unable to recover it. 00:24:49.142 [2024-05-16 20:23:36.068365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.142 [2024-05-16 20:23:36.068389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.142 qpair failed and we were unable to recover it. 00:24:49.142 [2024-05-16 20:23:36.068540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.142 [2024-05-16 20:23:36.068579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.142 qpair failed and we were unable to recover it. 00:24:49.142 [2024-05-16 20:23:36.068709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.142 [2024-05-16 20:23:36.068737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.142 qpair failed and we were unable to recover it. 00:24:49.142 [2024-05-16 20:23:36.068832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.142 [2024-05-16 20:23:36.068864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.142 qpair failed and we were unable to recover it. 00:24:49.142 [2024-05-16 20:23:36.068959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.142 [2024-05-16 20:23:36.068985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.142 qpair failed and we were unable to recover it. 00:24:49.142 [2024-05-16 20:23:36.069101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.142 [2024-05-16 20:23:36.069126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.142 qpair failed and we were unable to recover it. 00:24:49.142 [2024-05-16 20:23:36.069239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.142 [2024-05-16 20:23:36.069265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.142 qpair failed and we were unable to recover it. 00:24:49.142 [2024-05-16 20:23:36.069380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.142 [2024-05-16 20:23:36.069406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.142 qpair failed and we were unable to recover it. 00:24:49.142 [2024-05-16 20:23:36.069521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.142 [2024-05-16 20:23:36.069546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.142 qpair failed and we were unable to recover it. 
00:24:49.142 [2024-05-16 20:23:36.069654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.069679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.069790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.069815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.069923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.069949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.070058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.070083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.070163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.070188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.070263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.070288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.070406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.070432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.070546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.070574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.070697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.070735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.070862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.070890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 
00:24:49.143 [2024-05-16 20:23:36.070995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.071020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.071098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.071124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.071211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.071238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.071331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.071357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.071478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.071506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.071598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.071624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.071735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.071761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.071883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.071909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.072022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.072047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.072129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.072161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 
00:24:49.143 [2024-05-16 20:23:36.072309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.072335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.072417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.072442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.072523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.072550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.072635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.072660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.072744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.072769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.072860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.072887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.143 qpair failed and we were unable to recover it. 00:24:49.143 [2024-05-16 20:23:36.073000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.143 [2024-05-16 20:23:36.073026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.073105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.073130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.073253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.073280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.073369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.073394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 
00:24:49.144 [2024-05-16 20:23:36.073528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.073553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.073666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.073692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.073779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.073806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.073929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.073955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.074038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.074065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.074182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.074207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.074293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.074320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.074404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.074430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.074545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.074570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.074662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.074686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 
00:24:49.144 [2024-05-16 20:23:36.074782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.074821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.074954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.074982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.075079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.075105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.075192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.075219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.075330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.075356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.075468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.075494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.075578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.075606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.075713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.075752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.075869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.075898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.075990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.076016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 
00:24:49.144 [2024-05-16 20:23:36.076099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.076125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.076208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.076233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.144 qpair failed and we were unable to recover it. 00:24:49.144 [2024-05-16 20:23:36.076319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.144 [2024-05-16 20:23:36.076344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.145 qpair failed and we were unable to recover it. 00:24:49.145 [2024-05-16 20:23:36.076424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.145 [2024-05-16 20:23:36.076449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.145 qpair failed and we were unable to recover it. 00:24:49.145 [2024-05-16 20:23:36.076531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.145 [2024-05-16 20:23:36.076556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.145 qpair failed and we were unable to recover it. 00:24:49.145 [2024-05-16 20:23:36.076693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.145 [2024-05-16 20:23:36.076718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.145 qpair failed and we were unable to recover it. 00:24:49.145 [2024-05-16 20:23:36.076834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.145 [2024-05-16 20:23:36.076867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.145 qpair failed and we were unable to recover it. 00:24:49.145 [2024-05-16 20:23:36.076982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.145 [2024-05-16 20:23:36.077008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.145 qpair failed and we were unable to recover it. 00:24:49.145 [2024-05-16 20:23:36.077118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.145 [2024-05-16 20:23:36.077143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.145 qpair failed and we were unable to recover it. 00:24:49.145 [2024-05-16 20:23:36.077222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.145 [2024-05-16 20:23:36.077247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.145 qpair failed and we were unable to recover it. 
00:24:49.145 [2024-05-16 20:23:36.077337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.145 [2024-05-16 20:23:36.077362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.145 qpair failed and we were unable to recover it. 00:24:49.145 [2024-05-16 20:23:36.077445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.145 [2024-05-16 20:23:36.077471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.145 qpair failed and we were unable to recover it. 00:24:49.145 [2024-05-16 20:23:36.077549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.145 [2024-05-16 20:23:36.077575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.145 qpair failed and we were unable to recover it. 00:24:49.145 [2024-05-16 20:23:36.077649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.145 [2024-05-16 20:23:36.077674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.145 qpair failed and we were unable to recover it. 00:24:49.145 [2024-05-16 20:23:36.077778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.145 [2024-05-16 20:23:36.077803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.145 qpair failed and we were unable to recover it. 00:24:49.145 [2024-05-16 20:23:36.077892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.145 [2024-05-16 20:23:36.077920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.145 qpair failed and we were unable to recover it. 00:24:49.145 [2024-05-16 20:23:36.078032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.145 [2024-05-16 20:23:36.078058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.145 qpair failed and we were unable to recover it. 00:24:49.145 [2024-05-16 20:23:36.078164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.145 [2024-05-16 20:23:36.078189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.145 qpair failed and we were unable to recover it. 00:24:49.145 [2024-05-16 20:23:36.078302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.145 [2024-05-16 20:23:36.078327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.145 qpair failed and we were unable to recover it. 00:24:49.145 [2024-05-16 20:23:36.078457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.145 [2024-05-16 20:23:36.078482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.145 qpair failed and we were unable to recover it. 
00:24:49.145 [2024-05-16 20:23:36.078573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.145 [2024-05-16 20:23:36.078598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.145 qpair failed and we were unable to recover it. 00:24:49.145 [2024-05-16 20:23:36.078686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.078711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.078849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.078881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.078995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.079023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.079114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.079142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.079256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.079281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.079390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.079417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.079532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.079558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.079669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.079696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.079770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.079797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 
00:24:49.146 [2024-05-16 20:23:36.079916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.079943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.080023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.080049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.080129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.080154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.080260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.080285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.080400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.080425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.080502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.080527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.080627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.080651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.080794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.080819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.080907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.080933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.081020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.081048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 
00:24:49.146 [2024-05-16 20:23:36.081157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.081183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.081320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.081346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.081463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.081489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.081575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.081601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.081692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.081720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.081812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.081838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.081932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.081958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.146 qpair failed and we were unable to recover it. 00:24:49.146 [2024-05-16 20:23:36.082042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.146 [2024-05-16 20:23:36.082067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.082176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.082201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.082279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.082305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 
00:24:49.147 [2024-05-16 20:23:36.082422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.082448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.082557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.082583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.082693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.082718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.082796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.082822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.082933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.082959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.083072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.083098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.083176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.083200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.083306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.083331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.083442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.083467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.083572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.083597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 
00:24:49.147 [2024-05-16 20:23:36.083689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.083717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.083801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.083826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.083914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.083941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.084025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.084050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.084166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.084191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.084275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.084301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.084387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.084412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.084527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.084553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.084677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.084702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.084809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.084835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 
00:24:49.147 [2024-05-16 20:23:36.084929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.084956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.085097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.085123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.085261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.085286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.085400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.147 [2024-05-16 20:23:36.085426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.147 qpair failed and we were unable to recover it. 00:24:49.147 [2024-05-16 20:23:36.085540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.085568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.085712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.085738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.085822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.085847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.085970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.085996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.086077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.086103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.086216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.086241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 
00:24:49.148 [2024-05-16 20:23:36.086347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.086372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.086487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.086512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.086585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.086610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.086712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.086738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.086841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.086873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.086987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.087013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.087096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.087122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.087235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.087260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.087350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.087375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.087487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.087512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 
00:24:49.148 [2024-05-16 20:23:36.087618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.087647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.087731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.087757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.087839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.087872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.087972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.088010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.088128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.088154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.088267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.088293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.088413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.088438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.088577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.088602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.088718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.088745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 00:24:49.148 [2024-05-16 20:23:36.088845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.088891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.148 qpair failed and we were unable to recover it. 
00:24:49.148 [2024-05-16 20:23:36.089003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.148 [2024-05-16 20:23:36.089029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.089138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.089163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.089245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.089270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.089359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.089385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.089497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.089523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.089616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.089643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.089753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.089779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.089867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.089895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.090014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.090038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.090146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.090171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 
00:24:49.149 [2024-05-16 20:23:36.090284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.090309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.090394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.090418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.090530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.090555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.090636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.090663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.090782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.090810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.090907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.090933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.091044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.091070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.091162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.091190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.091299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.091325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.091439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.091466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 
00:24:49.149 [2024-05-16 20:23:36.091543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.091568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.091682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.091707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.091817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.091843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.091969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.091995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.092073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.092098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.092209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.092234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.092343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.092369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.092480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.092505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.092593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.092621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.092712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.092738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 
00:24:49.149 [2024-05-16 20:23:36.092828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.092860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.092973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.149 [2024-05-16 20:23:36.092999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.149 qpair failed and we were unable to recover it. 00:24:49.149 [2024-05-16 20:23:36.093088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.093115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.093231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.093257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.093373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.093400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.093488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.093517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.093634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.093659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.093743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.093769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.093879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.093905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.093997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.094022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 
00:24:49.150 [2024-05-16 20:23:36.094112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.094138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.094248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.094273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.094380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.094405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.094498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.094523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.094612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.094637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.094774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.094799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.094906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.094933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.095048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.095073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.095160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.095186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.095295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.095322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 
00:24:49.150 [2024-05-16 20:23:36.095447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.095485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.095604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.095631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.095747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.095773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.095864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.095890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.096038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.096063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.096144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.096169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.096253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.096277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.096365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.096395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.096503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.096527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.096614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.096642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 
00:24:49.150 [2024-05-16 20:23:36.096730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.096756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.096837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.096868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.096957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.096982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.097096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.097123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.097217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.097242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.097322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.097347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.097462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.097486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.097569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.150 [2024-05-16 20:23:36.097594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.150 qpair failed and we were unable to recover it. 00:24:49.150 [2024-05-16 20:23:36.097708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.097733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.097816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.097841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 
00:24:49.151 [2024-05-16 20:23:36.097966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.097991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.098109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.098134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.098237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.098262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.098353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.098392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.098484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.098512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.098595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.098621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.098732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.098758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.098891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.098918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.099029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.099054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.099171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.099197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 
00:24:49.151 [2024-05-16 20:23:36.099282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.099308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.099427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.099451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.099567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.099594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.099674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.099700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.099807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.099837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.099979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.100005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.100086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.100111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.100191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.100215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.100328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.100354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.100476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.100506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 
00:24:49.151 [2024-05-16 20:23:36.100618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.100643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.100776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.100801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.100886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.100914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.101025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.101050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.101135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.101162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.101241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.101267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.101348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.101373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.101497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.101525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.101619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.101645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.151 [2024-05-16 20:23:36.101783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.101808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 
00:24:49.151 [2024-05-16 20:23:36.101918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.151 [2024-05-16 20:23:36.101944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.151 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.102054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.102079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.102188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.102213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.102327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.102352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.102435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.102460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.102539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.102564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.102649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.102677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.102758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.102787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.102903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.102928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.103010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.103036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 
00:24:49.152 [2024-05-16 20:23:36.103120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.103145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.103262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.103288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.103401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.103427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.103539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.103565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.103703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.103728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.103834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.103867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.103987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.104015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.104156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.104182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.104294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.104320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.104405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.104430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 
00:24:49.152 [2024-05-16 20:23:36.104520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.104546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.104631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.104659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.104770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.104796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.104910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.104937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.105023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.105052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.105168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.105193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.105275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.105301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.105384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.105410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.105551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.105578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.105689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.105715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 
00:24:49.152 [2024-05-16 20:23:36.105829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.105861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.105978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.106003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.106095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.106120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.106205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.106231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.106345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.106372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.106483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.106507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.106620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.106645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.106730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.106755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.106832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.106862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 00:24:49.152 [2024-05-16 20:23:36.106976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.152 [2024-05-16 20:23:36.107003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.152 qpair failed and we were unable to recover it. 
00:24:49.153 [2024-05-16 20:23:36.107117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.107143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.107233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.107259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.107343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.107368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.107453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.107478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.107596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.107621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.107729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.107754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.107856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.107884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.108000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.108025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.108109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.108133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.108246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.108271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 
00:24:49.153 [2024-05-16 20:23:36.108355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.108380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.108491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.108519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.108608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.108634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.108735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.108760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.108849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.108883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.108962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.108987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.109092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.109117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.109228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.109253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.109397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.109424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.109534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.109559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 
00:24:49.153 [2024-05-16 20:23:36.109645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.109670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.109780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.109806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.109896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.109923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.110040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.110064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.110155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.110180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.110296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.110321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.110432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.110457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.110547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.110574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.110659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.110684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.110800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.110825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 
00:24:49.153 [2024-05-16 20:23:36.110915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.110941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.111016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.111041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.111155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.111183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.111317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.111344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.111432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.111457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.111581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.111607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.111697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.111722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.111809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.111836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.153 [2024-05-16 20:23:36.111956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.153 [2024-05-16 20:23:36.111982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.153 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.112125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.112151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 
00:24:49.154 [2024-05-16 20:23:36.112227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.112251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.112338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.112362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.112444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.112470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.112604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.112629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.112742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.112768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.112849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.112880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.112999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.113024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.113105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.113130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.113227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.113252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.113360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.113385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 
00:24:49.154 [2024-05-16 20:23:36.113464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.113489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.113582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.113606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.113721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.113747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.113864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.113891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.113975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.114000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.114087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.114112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.114225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.114250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.114328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.114353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.114434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.114459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.114568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.114593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 
00:24:49.154 [2024-05-16 20:23:36.114699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.114724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.114828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.114858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.114968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.114993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.115079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.115107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.115198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.115224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.115309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.115338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.115459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.115484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.115570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.115596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.115697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.115723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.115833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.115866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 
00:24:49.154 [2024-05-16 20:23:36.115968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.115993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.116103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.116128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.116213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.116239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.116348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.116373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.116488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.116515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.116603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.116631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.116721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.154 [2024-05-16 20:23:36.116749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.154 qpair failed and we were unable to recover it. 00:24:49.154 [2024-05-16 20:23:36.116866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.116894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.116982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.117011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.117109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.117136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 
00:24:49.155 [2024-05-16 20:23:36.117232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.117257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.117350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.117377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.117466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.117492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.117604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.117629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.117712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.117737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.117848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.117879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.117993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.118018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.118096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.118121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.118207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.118233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.118348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.118373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 
00:24:49.155 [2024-05-16 20:23:36.118462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.118492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.118578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.118604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.118732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.118771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.118868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.118897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.118979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.119005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.119091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.119116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.119205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.119230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.119369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.119394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.119483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.119511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.119599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.119625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 
00:24:49.155 [2024-05-16 20:23:36.119715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.119740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.119826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.119851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.119946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.119971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.120048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.120073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.120179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.120204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.120292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.120322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.120410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.120435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.120519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.120544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.120642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.120670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.120779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.120804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 
00:24:49.155 [2024-05-16 20:23:36.120910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.120937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.121020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.121045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.121134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.121160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.121246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.121272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.121381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.121406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.155 qpair failed and we were unable to recover it. 00:24:49.155 [2024-05-16 20:23:36.121514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.155 [2024-05-16 20:23:36.121539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.121626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.121653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.121748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.121775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.121889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.121915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.122006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.122033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 
00:24:49.156 [2024-05-16 20:23:36.122125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.122152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.122240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.122266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.122369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.122394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.122481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.122506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.122607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.122633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.122721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.122749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.122846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.122879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.122965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.122990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.123069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.123094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.123208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.123232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 
00:24:49.156 [2024-05-16 20:23:36.123342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.123367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.123478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.123503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.123593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.123620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.123711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.123738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.123849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.123885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.124017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.124043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.124127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.124153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.124266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.124291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.124376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.124402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.124489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.124514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 
00:24:49.156 [2024-05-16 20:23:36.124597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.124622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.124735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.124760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.124850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.124891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.124997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.125022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.125108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.125134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.125219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.125244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.125366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.125391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.125472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.125496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.156 qpair failed and we were unable to recover it. 00:24:49.156 [2024-05-16 20:23:36.125629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.156 [2024-05-16 20:23:36.125655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 00:24:49.157 [2024-05-16 20:23:36.125744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.125770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 
00:24:49.157 [2024-05-16 20:23:36.125877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.125903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 00:24:49.157 [2024-05-16 20:23:36.125982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.126007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 00:24:49.157 [2024-05-16 20:23:36.126114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.126139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 00:24:49.157 [2024-05-16 20:23:36.126219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.126244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 00:24:49.157 [2024-05-16 20:23:36.126327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.126352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 00:24:49.157 [2024-05-16 20:23:36.126432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.126457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 00:24:49.157 [2024-05-16 20:23:36.126569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.126596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 00:24:49.157 [2024-05-16 20:23:36.126682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.126708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 00:24:49.157 [2024-05-16 20:23:36.126800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.126824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 00:24:49.157 [2024-05-16 20:23:36.126958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.126997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 
00:24:49.157 [2024-05-16 20:23:36.127120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.127147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 00:24:49.157 [2024-05-16 20:23:36.127228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.127254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 00:24:49.157 [2024-05-16 20:23:36.127331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.127356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 00:24:49.157 [2024-05-16 20:23:36.127462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.127487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 00:24:49.157 [2024-05-16 20:23:36.127569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.127594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 00:24:49.157 [2024-05-16 20:23:36.127683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.127710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 00:24:49.157 [2024-05-16 20:23:36.127828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.127861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 00:24:49.157 [2024-05-16 20:23:36.127951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.127976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 00:24:49.157 [2024-05-16 20:23:36.128057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.128082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 00:24:49.157 [2024-05-16 20:23:36.128182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.157 [2024-05-16 20:23:36.128207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.157 qpair failed and we were unable to recover it. 
00:24:49.163 [2024-05-16 20:23:36.152669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.152694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.152777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.152804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.152897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.152924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.153011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.153037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.153150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.153180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.153269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.153295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.153387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.153412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.153524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.153550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.153659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.153684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.153769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.153795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 
00:24:49.163 [2024-05-16 20:23:36.153886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.153912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.153999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.154024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.154112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.154137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.154249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.154274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.154359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.154386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.154473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.154498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.154581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.154606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.154732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.154757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.154846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.154877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.154965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.154990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 
00:24:49.163 [2024-05-16 20:23:36.155075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.155100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.155212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.155238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.155359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.155384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.155487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.155513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.155602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.155627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.163 qpair failed and we were unable to recover it. 00:24:49.163 [2024-05-16 20:23:36.155707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.163 [2024-05-16 20:23:36.155734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.155820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.155846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.155936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.155963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.156073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.156099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.156180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.156206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 
00:24:49.164 [2024-05-16 20:23:36.156310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.156336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.156414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.156440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.156525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.156550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.156627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.156653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.156748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.156773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.156860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.156887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.156970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.156996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.157085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.157110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.157191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.157216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.157309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.157334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 
00:24:49.164 [2024-05-16 20:23:36.157421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.157446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.157525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.157551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.157659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.157685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.157766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.157791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.157904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.157934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.158020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.158046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.158131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.158156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.158276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.158302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.158416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.158440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.158519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.158544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 
00:24:49.164 [2024-05-16 20:23:36.158624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.158650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.164 [2024-05-16 20:23:36.158731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.164 [2024-05-16 20:23:36.158756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.164 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.158828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.158858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.158950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.158975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.159060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.159085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.159204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.159230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.159314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.159340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.159427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.159453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.159543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.159569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.159661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.159686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 
00:24:49.165 [2024-05-16 20:23:36.159794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.159819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.159912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.159938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.160028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.160054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.160133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.160158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.160239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.160265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.160371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.160397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.160501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.160526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.160608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.160633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.160720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.160746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.160881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.160907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 
00:24:49.165 [2024-05-16 20:23:36.161007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.161033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.161118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.161144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.161219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.161244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.161328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.161353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.161469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.161494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.161577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.161602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.161689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.161727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.161815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.165 [2024-05-16 20:23:36.161843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.165 qpair failed and we were unable to recover it. 00:24:49.165 [2024-05-16 20:23:36.161946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.166 [2024-05-16 20:23:36.161972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.166 qpair failed and we were unable to recover it. 00:24:49.166 [2024-05-16 20:23:36.162089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.166 [2024-05-16 20:23:36.162115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.166 qpair failed and we were unable to recover it. 
00:24:49.166 [2024-05-16 20:23:36.162203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.166 [2024-05-16 20:23:36.162229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.166 qpair failed and we were unable to recover it. 00:24:49.166 [2024-05-16 20:23:36.162353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.166 [2024-05-16 20:23:36.162392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.166 qpair failed and we were unable to recover it. 00:24:49.166 [2024-05-16 20:23:36.162485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.166 [2024-05-16 20:23:36.162512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.166 qpair failed and we were unable to recover it. 00:24:49.166 [2024-05-16 20:23:36.162601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.166 [2024-05-16 20:23:36.162628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.166 qpair failed and we were unable to recover it. 00:24:49.166 [2024-05-16 20:23:36.162736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.166 [2024-05-16 20:23:36.162769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.166 qpair failed and we were unable to recover it. 00:24:49.166 [2024-05-16 20:23:36.162861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.166 [2024-05-16 20:23:36.162887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.166 qpair failed and we were unable to recover it. 00:24:49.166 [2024-05-16 20:23:36.162979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.166 [2024-05-16 20:23:36.163004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.166 qpair failed and we were unable to recover it. 00:24:49.166 [2024-05-16 20:23:36.163089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.166 [2024-05-16 20:23:36.163115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.166 qpair failed and we were unable to recover it. 00:24:49.166 [2024-05-16 20:23:36.163198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.166 [2024-05-16 20:23:36.163223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.166 qpair failed and we were unable to recover it. 00:24:49.166 [2024-05-16 20:23:36.163362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.166 [2024-05-16 20:23:36.163388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.166 qpair failed and we were unable to recover it. 
00:24:49.166 [2024-05-16 20:23:36.163470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.166 [2024-05-16 20:23:36.163496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.166 qpair failed and we were unable to recover it. 00:24:49.166 [2024-05-16 20:23:36.163612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.166 [2024-05-16 20:23:36.163638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.166 qpair failed and we were unable to recover it. 00:24:49.166 [2024-05-16 20:23:36.163777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.166 [2024-05-16 20:23:36.163802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.166 qpair failed and we were unable to recover it. 00:24:49.166 [2024-05-16 20:23:36.163894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.166 [2024-05-16 20:23:36.163921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.166 qpair failed and we were unable to recover it. 00:24:49.166 [2024-05-16 20:23:36.164004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.166 [2024-05-16 20:23:36.164029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.166 qpair failed and we were unable to recover it. 00:24:49.166 [2024-05-16 20:23:36.164130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.166 [2024-05-16 20:23:36.164156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.166 qpair failed and we were unable to recover it. 00:24:49.166 [2024-05-16 20:23:36.164238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.166 [2024-05-16 20:23:36.164266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.166 qpair failed and we were unable to recover it. 00:24:49.166 [2024-05-16 20:23:36.164376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.166 [2024-05-16 20:23:36.164401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.166 qpair failed and we were unable to recover it. 00:24:49.166 [2024-05-16 20:23:36.164493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.164519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.164628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.164652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 
00:24:49.167 [2024-05-16 20:23:36.164764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.164789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.164884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.164910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.164996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.165022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.165100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.165125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.165208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.165234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.165328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.165353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.165441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.165466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.165581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.165607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.165700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.165738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.165865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.165897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 
00:24:49.167 [2024-05-16 20:23:36.166020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.166047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.166134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.166160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.166247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.166272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.166367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.166392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.166474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.166499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.166598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.166637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.166746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.166782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.166878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.166907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.167002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.167029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.167113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.167139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 
00:24:49.167 [2024-05-16 20:23:36.167229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.167255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.167334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.167363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.167484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.167514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.167601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.167 [2024-05-16 20:23:36.167627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.167 qpair failed and we were unable to recover it. 00:24:49.167 [2024-05-16 20:23:36.167709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.167734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.167821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.167846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.167997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.168023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.168105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.168130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.168206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.168232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.168322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.168347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 
00:24:49.168 [2024-05-16 20:23:36.168459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.168486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.168568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.168594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.168692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.168718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.168800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.168827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.168948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.168974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.169064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.169089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.169172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.169197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.169272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.169297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.169440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.169466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.169554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.169581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 
00:24:49.168 [2024-05-16 20:23:36.169687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.169712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.169800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.169825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.169949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.169975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.170064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.170090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.170173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.170198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.170307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.170332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.170421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.170449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.170541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.170566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.170681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.170707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 00:24:49.168 [2024-05-16 20:23:36.170823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.168 [2024-05-16 20:23:36.170848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.168 qpair failed and we were unable to recover it. 
00:24:49.168 [2024-05-16 20:23:36.170945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.170972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.171053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.171085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.171176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.171203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.171290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.171316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.171398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.171423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.171561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.171585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.171704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.171729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.171815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.171843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.171933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.171958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.172068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.172094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 
00:24:49.169 [2024-05-16 20:23:36.172173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.172199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.172286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.172311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.172422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.172448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.172542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.172567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.172680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.172708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.172813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.172839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.172962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.172990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.173104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.173130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.173228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.173254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.173333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.173359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 
00:24:49.169 [2024-05-16 20:23:36.173467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.173492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.173579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.173604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.173681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.173707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.173783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.173808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.173927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.173952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.174046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.174071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.169 [2024-05-16 20:23:36.174187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.169 [2024-05-16 20:23:36.174215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.169 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.174335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.174362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.174449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.174482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.174579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.174608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 
00:24:49.170 [2024-05-16 20:23:36.174704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.174730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.174814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.174839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.174958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.174985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.175093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.175118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.175234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.175259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.175366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.175391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.175498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.175523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.175661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.175686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.175799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.175829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.175936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.175970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 
00:24:49.170 [2024-05-16 20:23:36.176057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.176084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.176165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.176192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.176319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.176346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.176463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.176490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.176576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.176603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.176698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.176727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.176818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.176846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.176947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.176973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.177060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.177086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.177180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.177205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 
00:24:49.170 [2024-05-16 20:23:36.177320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.177346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.177460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.177486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.177603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.177630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.177715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.177741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.177832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.177863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.170 [2024-05-16 20:23:36.177955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.170 [2024-05-16 20:23:36.177983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.170 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.178082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.178107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.178220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.178246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.178361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.178389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.178479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.178505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 
00:24:49.171 [2024-05-16 20:23:36.178603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.178629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.178709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.178734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.178863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.178902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.179024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.179051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.179132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.179157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.179241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.179266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.179414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.179439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.179526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.179552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.179644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.179675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.179807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.179834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 
00:24:49.171 [2024-05-16 20:23:36.179928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.179955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.180067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.180093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.180190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.180217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.180307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.180335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.180451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.180477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.180569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.180595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.180685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.180712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.180827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.180865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.180950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.180976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.181086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.181111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 
00:24:49.171 [2024-05-16 20:23:36.181199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.181224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.181337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.181362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.171 [2024-05-16 20:23:36.181458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.171 [2024-05-16 20:23:36.181486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.171 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.181603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.181631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.181723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.181749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.181859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.181886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.181982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.182007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.182094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.182119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.182230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.182257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.182340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.182366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 
00:24:49.172 [2024-05-16 20:23:36.182485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.182510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.182587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.182612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.182699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.182724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.182807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.182833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.182951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.182978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.183075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.183108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.183233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.183260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.183372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.183398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.183482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.183508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.183598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.183623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 
00:24:49.172 [2024-05-16 20:23:36.183712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.183738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.183821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.183847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.183964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.183990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.184076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.184101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.184177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.184202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.184292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.184318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.184407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.184437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.184527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.184554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.184648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.172 [2024-05-16 20:23:36.184674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.172 qpair failed and we were unable to recover it. 00:24:49.172 [2024-05-16 20:23:36.184789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.184814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 
00:24:49.173 [2024-05-16 20:23:36.184939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.184965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.185083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.185108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.185193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.185218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.185295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.185320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.185408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.185433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.185546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.185571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.185647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.185672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.185758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.185783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.185881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.185906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.185993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.186018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 
00:24:49.173 [2024-05-16 20:23:36.186100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.186125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.186203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.186228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.186336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.186366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.186459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.186484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.186567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.186592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.186688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.186713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.186829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.186862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.186998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.187023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.187138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.187163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.187275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.187301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 
00:24:49.173 [2024-05-16 20:23:36.187404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.187429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.187518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.187546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.187632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.187658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.187747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.187774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.187866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.187893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.187981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.188007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.188096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.188121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.173 [2024-05-16 20:23:36.188238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.173 [2024-05-16 20:23:36.188263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.173 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.188377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.188402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.188488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.188513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 
00:24:49.174 [2024-05-16 20:23:36.188607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.188634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.188721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.188747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.188836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.188871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.188962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.188988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.189076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.189103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.189214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.189240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.189324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.189351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.189439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.189464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.189545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.189570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.189647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.189675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 
00:24:49.174 [2024-05-16 20:23:36.189787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.189812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.189914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.189952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.190039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.190066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.190157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.190183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.190270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.190297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.190418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.190446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.190569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.190596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.190681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.190707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.190792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.190818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.190937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.190963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 
00:24:49.174 [2024-05-16 20:23:36.191077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.191102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.191191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.191216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.191299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.191325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.191420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.191448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.191548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.191574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.191665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.191695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.191791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.191818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.191917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.174 [2024-05-16 20:23:36.191944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.174 qpair failed and we were unable to recover it. 00:24:49.174 [2024-05-16 20:23:36.192032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.192062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.192180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.192206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 
00:24:49.175 [2024-05-16 20:23:36.192296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.192321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.192464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.192489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.192597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.192622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.192701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.192727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.192815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.192841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.192944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.192971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.193048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.193079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.193167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.193194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.193289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.193315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.193437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.193475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 
00:24:49.175 [2024-05-16 20:23:36.193571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.193598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.193706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.193732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.193858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.193884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.193973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.193999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.194089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.194114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.194233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.194258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.194349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.194375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.194460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.194486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.194566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.194594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.194680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.194707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 
00:24:49.175 [2024-05-16 20:23:36.194846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.194878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.194969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.194994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.195090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.195116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.195194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.195219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.195298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.195325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.195443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.195469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.195587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.195625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.195724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.195751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.195837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.195869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.195980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.196008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 
00:24:49.175 [2024-05-16 20:23:36.196097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.196122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.196211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.196238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.196346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.196371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.196462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.196490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.196594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.196622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.175 [2024-05-16 20:23:36.196715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.175 [2024-05-16 20:23:36.196742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.175 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.196836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.196874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.196992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.197017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.197106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.197131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.197222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.197250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 
00:24:49.176 [2024-05-16 20:23:36.197377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.197404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.197516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.197544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.197635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.197660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.197773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.197799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.197888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.197916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.198030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.198056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.198142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.198168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.198253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.198279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.198364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.198390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.198488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.198513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 
00:24:49.176 [2024-05-16 20:23:36.198600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.198626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.198757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.198782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.198895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.198921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.199038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.199064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.199151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.199176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.199287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.199313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.199416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.199441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.199521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.199549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.199657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.199684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.199770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.199795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 
00:24:49.176 [2024-05-16 20:23:36.199882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.199909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.200006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.200032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.200108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.200133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.200242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.200267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.200354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.200379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.200490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.200515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.200624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.200652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.200751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.200777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.200864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.200890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.201026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.201052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 
00:24:49.176 [2024-05-16 20:23:36.201167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.176 [2024-05-16 20:23:36.201193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.176 qpair failed and we were unable to recover it. 00:24:49.176 [2024-05-16 20:23:36.201300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.201326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.201417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.201444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.201530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.201556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.201645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.201671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.201781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.201807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.201888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.201915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.202001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.202025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.202134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.202160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.202254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.202279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 
00:24:49.177 [2024-05-16 20:23:36.202367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.202391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.202503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.202527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.202614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.202650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.202750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.202780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.202869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.202895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.202987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.203012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.203093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.203118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.203202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.203229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.203321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.203347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.203447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.203476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 
00:24:49.177 [2024-05-16 20:23:36.203564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.203590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.203686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.203712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.203796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.203829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.203949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.203981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.204078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.204106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.204198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.204225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.204317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.204343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.204433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.204460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.204553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.204580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.177 [2024-05-16 20:23:36.204701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.204727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 
00:24:49.177 [2024-05-16 20:23:36.204821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.177 [2024-05-16 20:23:36.204859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.177 qpair failed and we were unable to recover it. 00:24:49.178 [2024-05-16 20:23:36.204946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.178 [2024-05-16 20:23:36.204972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.178 qpair failed and we were unable to recover it. 00:24:49.178 [2024-05-16 20:23:36.205061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.178 [2024-05-16 20:23:36.205087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.178 qpair failed and we were unable to recover it. 00:24:49.178 [2024-05-16 20:23:36.205182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.178 [2024-05-16 20:23:36.205207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.178 qpair failed and we were unable to recover it. 00:24:49.178 [2024-05-16 20:23:36.205295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.178 [2024-05-16 20:23:36.205321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.178 qpair failed and we were unable to recover it. 00:24:49.178 [2024-05-16 20:23:36.205407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.178 [2024-05-16 20:23:36.205434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.178 qpair failed and we were unable to recover it. 00:24:49.178 [2024-05-16 20:23:36.205549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.178 [2024-05-16 20:23:36.205575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.178 qpair failed and we were unable to recover it. 00:24:49.178 [2024-05-16 20:23:36.205666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.178 [2024-05-16 20:23:36.205692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.178 qpair failed and we were unable to recover it. 00:24:49.178 [2024-05-16 20:23:36.205778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.178 [2024-05-16 20:23:36.205804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.178 qpair failed and we were unable to recover it. 00:24:49.178 [2024-05-16 20:23:36.205919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.178 [2024-05-16 20:23:36.205946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.178 qpair failed and we were unable to recover it. 
00:24:49.178 [2024-05-16 20:23:36.206033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.178 [2024-05-16 20:23:36.206058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.178 qpair failed and we were unable to recover it. 00:24:49.178 [2024-05-16 20:23:36.206141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.178 [2024-05-16 20:23:36.206167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.178 qpair failed and we were unable to recover it. 00:24:49.178 [2024-05-16 20:23:36.206254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.178 [2024-05-16 20:23:36.206280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.178 qpair failed and we were unable to recover it. 00:24:49.178 [2024-05-16 20:23:36.206360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.178 [2024-05-16 20:23:36.206386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.178 qpair failed and we were unable to recover it. 00:24:49.178 [2024-05-16 20:23:36.206509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.178 [2024-05-16 20:23:36.206535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.178 qpair failed and we were unable to recover it. 00:24:49.178 [2024-05-16 20:23:36.206616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.178 [2024-05-16 20:23:36.206642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.178 qpair failed and we were unable to recover it. 00:24:49.178 [2024-05-16 20:23:36.206723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.178 [2024-05-16 20:23:36.206749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.178 qpair failed and we were unable to recover it. 00:24:49.178 [2024-05-16 20:23:36.206826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.178 [2024-05-16 20:23:36.206851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.178 qpair failed and we were unable to recover it. 00:24:49.178 [2024-05-16 20:23:36.206950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.178 [2024-05-16 20:23:36.206976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.178 qpair failed and we were unable to recover it. 00:24:49.178 [2024-05-16 20:23:36.207062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.178 [2024-05-16 20:23:36.207087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.178 qpair failed and we were unable to recover it. 
00:24:49.178 [2024-05-16 20:23:36.207174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.178 [2024-05-16 20:23:36.207199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.207314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.207340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.207432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.207462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.207545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.207572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.207686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.207711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.207798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.207824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.207917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.207944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.208056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.208090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.208187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.208214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.208298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.208324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 
00:24:49.179 [2024-05-16 20:23:36.208428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.208453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.208546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.208572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.208658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.208684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.208775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.208802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.208899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.208925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.209014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.209040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.209126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.209152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.209243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.209270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.209347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.209373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.209454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.209480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 
00:24:49.179 [2024-05-16 20:23:36.209565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.209596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.209704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.209730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.209814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.209840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.209945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.209977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.210071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.210100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.210184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.210212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.210335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.210363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.210457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.210484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.210576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-05-16 20:23:36.210603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.179 qpair failed and we were unable to recover it. 00:24:49.179 [2024-05-16 20:23:36.210700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.180 [2024-05-16 20:23:36.210728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.180 qpair failed and we were unable to recover it. 
00:24:49.180 [2024-05-16 20:23:36.210820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.180 [2024-05-16 20:23:36.210845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.180 qpair failed and we were unable to recover it. 00:24:49.180 [2024-05-16 20:23:36.210945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.180 [2024-05-16 20:23:36.210971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.180 qpair failed and we were unable to recover it. 00:24:49.180 [2024-05-16 20:23:36.211079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.180 [2024-05-16 20:23:36.211105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.180 qpair failed and we were unable to recover it. 00:24:49.180 [2024-05-16 20:23:36.211190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.180 [2024-05-16 20:23:36.211216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.180 qpair failed and we were unable to recover it. 00:24:49.180 [2024-05-16 20:23:36.211302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.180 [2024-05-16 20:23:36.211329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.180 qpair failed and we were unable to recover it. 00:24:49.180 [2024-05-16 20:23:36.211409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.180 [2024-05-16 20:23:36.211441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.180 qpair failed and we were unable to recover it. 00:24:49.180 [2024-05-16 20:23:36.211537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.180 [2024-05-16 20:23:36.211564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.180 qpair failed and we were unable to recover it. 00:24:49.180 [2024-05-16 20:23:36.211646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.180 [2024-05-16 20:23:36.211672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.180 qpair failed and we were unable to recover it. 00:24:49.180 [2024-05-16 20:23:36.211764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.180 [2024-05-16 20:23:36.211791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.180 qpair failed and we were unable to recover it. 00:24:49.180 [2024-05-16 20:23:36.211885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.180 [2024-05-16 20:23:36.211911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.180 qpair failed and we were unable to recover it. 
00:24:49.180 [2024-05-16 20:23:36.212000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.180 [2024-05-16 20:23:36.212026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.212120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.212145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.212228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.212254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.212345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.212370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.212455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.212480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.212556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.212581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.212698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.212723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.212841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.212878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.212964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.212991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.213080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.213107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 
00:24:49.474 [2024-05-16 20:23:36.213188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.213215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.213302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.213328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.213422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.213448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.213527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.213554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.213655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.213689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.213786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.213817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.213920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.213948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.214046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.214074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.214196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.214222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.214313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.214339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 
00:24:49.474 [2024-05-16 20:23:36.214423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.214453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.214565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.214592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.214691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.214719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.214807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.214833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.214932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.214960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.215056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.215083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.215173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.215198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.215279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.215306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.474 [2024-05-16 20:23:36.215425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.474 [2024-05-16 20:23:36.215451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.474 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.215563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.215589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 
00:24:49.475 [2024-05-16 20:23:36.215705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.215730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.215844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.215878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.215962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.215987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.216096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.216122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.216238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.216263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.216348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.216377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.216467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.216493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.216586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.216612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.216717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.216743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.216828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.216860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 
00:24:49.475 [2024-05-16 20:23:36.216955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.216982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.217073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.217099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.217189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.217215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.217313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.217340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.217423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.217448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.217558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.217584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.217673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.217703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.217794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.217822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.217915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.217942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.218023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.218049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 
00:24:49.475 [2024-05-16 20:23:36.218125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.218151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.218235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.218278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.218366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.218395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.218529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.218554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.218637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.218662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.218752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.218779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.218866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.218909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.219014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.219043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.219167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.219196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.219344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.219375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 
00:24:49.475 [2024-05-16 20:23:36.219495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.219541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.219644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.219671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.475 qpair failed and we were unable to recover it. 00:24:49.475 [2024-05-16 20:23:36.219784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.475 [2024-05-16 20:23:36.219810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.219926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.219954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.220057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.220083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.220169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.220196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.220282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.220308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.220428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.220454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.220540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.220566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.220647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.220673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 
00:24:49.476 [2024-05-16 20:23:36.220760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.220786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.220870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.220897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.220979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.221004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.221091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.221120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.221224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.221250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.221345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.221371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.221464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.221490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.221576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.221605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.221701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.221739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.221870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.221898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 
00:24:49.476 [2024-05-16 20:23:36.221990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.222015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.222102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.222128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.222227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.222260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.223876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.223930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.224070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.224107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.224224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.224262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.224407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.224443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.224630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.224680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.224792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.224818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.224923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.224949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 
00:24:49.476 [2024-05-16 20:23:36.225038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.225065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.225154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.225180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.225263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.225288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.225376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.225402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.225493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.225519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.225601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.225626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.476 qpair failed and we were unable to recover it. 00:24:49.476 [2024-05-16 20:23:36.225712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.476 [2024-05-16 20:23:36.225739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.225833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.225880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.225971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.225998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.226081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.226106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 
00:24:49.477 [2024-05-16 20:23:36.226218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.226243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.226326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.226352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.226443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.226468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.226550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.226578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.226682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.226708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.226798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.226825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.226924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.226950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.227040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.227067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.227181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.227207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.227327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.227353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 
00:24:49.477 [2024-05-16 20:23:36.227439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.227463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.227552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.227577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.227659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.227684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.227784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.227823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.227951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.227979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.228059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.228085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.228205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.228230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.228317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.228343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.228438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.228465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.228558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.228585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 
00:24:49.477 [2024-05-16 20:23:36.228671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.228696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.228785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.228814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.228915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.228944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.229062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.229088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.229173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.229199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.229283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.229309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.229393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.229419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.229520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.229563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.229651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.229678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 00:24:49.477 [2024-05-16 20:23:36.229763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.229788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.477 qpair failed and we were unable to recover it. 
00:24:49.477 [2024-05-16 20:23:36.229924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.477 [2024-05-16 20:23:36.229950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.230033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.230060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.230176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.230201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.230282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.230307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.230390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.230414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.230505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.230531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.230619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.230650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.230782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.230820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.230951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.230979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.231066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.231092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 
00:24:49.478 [2024-05-16 20:23:36.231179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.231204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.231303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.231328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.231414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.231439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.231514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.231538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.231672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.231696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.231775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.231799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.231890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.231919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.232017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.232043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.232136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.232162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.232245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.232271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 
00:24:49.478 [2024-05-16 20:23:36.232379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.232408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.232494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.232521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.232613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.232639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.232753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.232779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.232859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.232890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.232985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.233011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.233102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.233129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.233238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.233267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.233354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.233381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.233462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.233488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 
00:24:49.478 [2024-05-16 20:23:36.233573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.233598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.233680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.478 [2024-05-16 20:23:36.233704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.478 qpair failed and we were unable to recover it. 00:24:49.478 [2024-05-16 20:23:36.233786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.233810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.233921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.233946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.234024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.234048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.234152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.234177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.234254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.234279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.234366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.234391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.234480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.234505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.234605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.234633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 
00:24:49.479 [2024-05-16 20:23:36.234746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.234773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.234869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.234896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.234977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.235003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.235087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.235113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.235204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.235229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.235344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.235369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.235479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.235504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.235593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.235618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.235727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.235752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.235841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.235875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 
00:24:49.479 [2024-05-16 20:23:36.235967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.235992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.236067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.236096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.236183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.236209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.236319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.236344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.236422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.236450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.236528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.479 [2024-05-16 20:23:36.236554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.479 qpair failed and we were unable to recover it. 00:24:49.479 [2024-05-16 20:23:36.236654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.236679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.236762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.236788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.236876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.236903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.236993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.237018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 
00:24:49.480 [2024-05-16 20:23:36.237105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.237130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.237264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.237289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.237399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.237425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.237545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.237571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.237661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.237689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.237784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.237810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.237912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.237938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.238046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.238073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.238162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.238188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.238299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.238326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 
00:24:49.480 [2024-05-16 20:23:36.238403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.238428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.238542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.238567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.238706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.238731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.238814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.238840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.238929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.238954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.239042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.239067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.239163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.239189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.239267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.239292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.239437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.239464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.239550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.239578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 
00:24:49.480 [2024-05-16 20:23:36.239666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.239691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.239801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.239826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.239922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.239948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.240036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.240061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.240177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.240204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.240336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.240361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.240443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.240468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.480 [2024-05-16 20:23:36.240549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.480 [2024-05-16 20:23:36.240575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.480 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.240683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.240708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.240831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.240865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 
00:24:49.481 [2024-05-16 20:23:36.240946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.240972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.241067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.241098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.241213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.241238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.241311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.241337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.241449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.241474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.241593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.241619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.241732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.241759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.241863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.241903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.242005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.242033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.242120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.242145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 
00:24:49.481 [2024-05-16 20:23:36.242290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.242316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.242397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.242424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.242508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.242536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.242618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.242644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.242743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.242770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.242908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.242935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.243020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.243046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.243153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.243180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.243265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.243293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.243377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.243402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 
00:24:49.481 [2024-05-16 20:23:36.243516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.243542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.243632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.243659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.243757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.243795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.243927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.243954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.244035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.244061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.244146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.244171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.244281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.244307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.244399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.244424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.481 [2024-05-16 20:23:36.244507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.481 [2024-05-16 20:23:36.244539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.481 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.244659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.244684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 
00:24:49.482 [2024-05-16 20:23:36.244772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.244799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.244889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.244916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.245003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.245032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.245115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.245141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.245230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.245257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.245346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.245372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.245460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.245486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.245608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.245647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.245728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.245754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.245844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.245877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 
00:24:49.482 [2024-05-16 20:23:36.245968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.245994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.246127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.246152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.246237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.246262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.246356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.246383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.246499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.246525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.246628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.246666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.246764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.246792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.246908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.246936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.247048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.247073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.247152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.247177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 
00:24:49.482 [2024-05-16 20:23:36.247284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.247309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.247411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.247436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.247546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.247572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.247655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.247680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.247784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.247809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.247901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.247929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.248027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.248052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.248135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.248160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.248242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.248267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.482 qpair failed and we were unable to recover it. 00:24:49.482 [2024-05-16 20:23:36.248379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.482 [2024-05-16 20:23:36.248404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 
00:24:49.483 [2024-05-16 20:23:36.248520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.248548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.248638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.248666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.248768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.248794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.248886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.248911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.248996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.249022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.249103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.249130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.249245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.249272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.249354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.249379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.249465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.249496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.249579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.249604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 
00:24:49.483 [2024-05-16 20:23:36.249715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.249740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.249829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.249860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.249980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.250007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.250107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.250136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.250226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.250253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.250365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.250391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.250501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.250526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.250616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.250642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.250758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.250783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.250884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.250912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 
00:24:49.483 [2024-05-16 20:23:36.251002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.251028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.251111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.251137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.251226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.251252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.251391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.251416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.251537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.251564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.251652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.251679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.251760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.251786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.251878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.251904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.252010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.252036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.252119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.252143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 
00:24:49.483 [2024-05-16 20:23:36.252230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.252254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.252366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.483 [2024-05-16 20:23:36.252394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.483 qpair failed and we were unable to recover it. 00:24:49.483 [2024-05-16 20:23:36.252480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.252506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.252595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.252621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.252712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.252738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.252845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.252885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.252992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.253019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.253100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.253127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.253223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.253248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.253356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.253385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 
00:24:49.484 [2024-05-16 20:23:36.253484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.253510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.253609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.253635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.253723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.253750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.253831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.253865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.253955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.253981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.254060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.254087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.254184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.254210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.254323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.254348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.254433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.254461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.254555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.254583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 
00:24:49.484 [2024-05-16 20:23:36.254667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.254693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.254780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.254806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.254894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.254920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.255003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.255027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.255140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.255164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.255270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.255294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.255378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.255402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.255484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.255512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.255595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.255621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.484 qpair failed and we were unable to recover it. 00:24:49.484 [2024-05-16 20:23:36.255730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.484 [2024-05-16 20:23:36.255759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 
00:24:49.485 [2024-05-16 20:23:36.255848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.255891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.255987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.256014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.256096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.256126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.256222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.256248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.256334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.256362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.256443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.256468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.256565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.256592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.256678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.256703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.256809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.256834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.256934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.256959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 
00:24:49.485 [2024-05-16 20:23:36.257045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.257070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.257204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.257229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.257316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.257343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.257429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.257454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.257533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.257558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.257636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.257661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.257752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.257778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.257887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.257914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.258024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.258050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.258144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.258173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 
00:24:49.485 [2024-05-16 20:23:36.258262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.258289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.258372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.258398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.258479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.258504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.485 [2024-05-16 20:23:36.258591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.485 [2024-05-16 20:23:36.258619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.485 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.258714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.258741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.258826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.258858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.258972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.258998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.259107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.259133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.259208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.259234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.259328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.259353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 
00:24:49.486 [2024-05-16 20:23:36.259439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.259464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.259546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.259571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.259684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.259709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.259793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.259822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.259924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.259951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.260039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.260064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.260152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.260179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.260260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.260285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.260392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.260417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.260526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.260552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 
00:24:49.486 [2024-05-16 20:23:36.260666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.260692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.260777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.260803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.260910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.260936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.261021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.261047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.261136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.261161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.261295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.261321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.261431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.261457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.261563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.261588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.261669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.261696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.261784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.261813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 
00:24:49.486 [2024-05-16 20:23:36.261910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.261937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.262047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.262072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.262187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.262214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.262294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.262321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.262398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.262425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.262507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.262532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.486 qpair failed and we were unable to recover it. 00:24:49.486 [2024-05-16 20:23:36.262619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.486 [2024-05-16 20:23:36.262644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.262787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.262812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.262906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.262933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.263011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.263036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 
00:24:49.487 [2024-05-16 20:23:36.263153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.263178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.263265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.263290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.263426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.263451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.263569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.263596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.263680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.263707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.263785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.263810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.263951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.263978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.264084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.264109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.264214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.264239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.264326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.264356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 
00:24:49.487 [2024-05-16 20:23:36.264445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.264473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.264581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.264607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.264718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.264744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.264847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.264881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.264963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.264989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.265102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.265128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.265210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.265236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.265323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.265349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.265464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.265492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.265570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.265595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 
00:24:49.487 [2024-05-16 20:23:36.265677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.265703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.265815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.265840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.265933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.265959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.266081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.266106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.266193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.266218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.266316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.266344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.266484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.266510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.266624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.266651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.266739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.266764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 00:24:49.487 [2024-05-16 20:23:36.266849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.487 [2024-05-16 20:23:36.266880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.487 qpair failed and we were unable to recover it. 
00:24:49.488 [2024-05-16 20:23:36.266961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.266986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.267065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.267089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.267163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.267187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.267269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.267293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.267406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.267431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.267568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.267592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.267707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.267739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.267832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.267865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.268008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.268034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.268141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.268168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 
00:24:49.488 [2024-05-16 20:23:36.268251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.268276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.268363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.268388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.268502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.268527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.268605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.268630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.268769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.268794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.268911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.268939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.269022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.269047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.269161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.269187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.269271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.269297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.269390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.269418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 
00:24:49.488 [2024-05-16 20:23:36.269511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.269538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.269627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.269653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.269746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.269771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.269881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.269907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.270021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.270046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.270134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.270160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.270294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.270320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.270437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.270465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.270550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.270577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.270666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.270693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 
00:24:49.488 [2024-05-16 20:23:36.270780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.270805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.488 [2024-05-16 20:23:36.270943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.488 [2024-05-16 20:23:36.270969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.488 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.271057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.271082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.271193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.271223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.271311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.271337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.271449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.271474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.271589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.271616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.271704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.271731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.271825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.271850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.271944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.271970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 
00:24:49.489 [2024-05-16 20:23:36.272108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.272134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.272215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.272240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.272355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.272380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.272471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.272498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.272610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.272636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.272740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.272767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.272861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.272888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.272969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.272995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.273068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.273093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.273224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.273249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 
00:24:49.489 [2024-05-16 20:23:36.273366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.273390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.273502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.273527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.273665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.273692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.273805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.273831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.273923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.273950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.274029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.274054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.274166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.274191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.274304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.274330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.274416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.274442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.274547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.274572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 
00:24:49.489 [2024-05-16 20:23:36.274704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.274744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.274842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.274875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.274965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.274991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.275105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.275130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.275265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.489 [2024-05-16 20:23:36.275290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.489 qpair failed and we were unable to recover it. 00:24:49.489 [2024-05-16 20:23:36.275374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.275399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.275481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.275508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.275594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.275623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.275714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.275740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.275824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.275850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 
00:24:49.490 [2024-05-16 20:23:36.275942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.275968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.276047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.276072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.276211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.276237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.276353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.276383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.276503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.276530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.276641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.276666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.276772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.276810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.276945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.276973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.277067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.277093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.277201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.277226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 
00:24:49.490 [2024-05-16 20:23:36.277313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.277340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.277420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.277446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.277561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.277587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.277693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.277719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.277833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.277866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.277956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.277982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.278069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.278094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.278180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.278206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.278344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.278373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.278460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.278487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 
00:24:49.490 [2024-05-16 20:23:36.278603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.278631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.278744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.490 [2024-05-16 20:23:36.278770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.490 qpair failed and we were unable to recover it. 00:24:49.490 [2024-05-16 20:23:36.278857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.278884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.278997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.279023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.279139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.279166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.279253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.279279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.279373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.279401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.279486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.279513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.279630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.279657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.279737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.279765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 
00:24:49.491 [2024-05-16 20:23:36.279859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.279891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.280000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.280026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.280108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.280133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.280210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.280235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.280323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.280349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.280436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.280462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.280578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.280605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.280710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.280736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.280820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.280846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.280971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.280997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 
00:24:49.491 [2024-05-16 20:23:36.281073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.281098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.281208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.281234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.281348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.281376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.281481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.281507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.281589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.281615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.281728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.281755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.281834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.281865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.281970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.281995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.282103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.282129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.282241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.282266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 
00:24:49.491 [2024-05-16 20:23:36.282379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.282405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.282516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.282542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.282624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.282650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.282742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.282769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.282871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.282909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.283004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.283032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.491 qpair failed and we were unable to recover it. 00:24:49.491 [2024-05-16 20:23:36.283166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.491 [2024-05-16 20:23:36.283192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.283273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.283300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.283388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.283414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.283516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.283542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 
00:24:49.492 [2024-05-16 20:23:36.283678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.283703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.283784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.283810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.284004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.284031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.284147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.284173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.284257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.284283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.284391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.284416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.284497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.284522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.284608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.284634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.284742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.284767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.284905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.284932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 
00:24:49.492 [2024-05-16 20:23:36.285005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.285034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.285112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.285138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.285246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.285272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.285383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.285409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.285490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.285516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.285643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.285681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.285802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.285829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.285983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.286020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.286117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.286143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.286231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.286257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 
00:24:49.492 [2024-05-16 20:23:36.286374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.286400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.286512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.286537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.286615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.286640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.286754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.286779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.286874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.286904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.287041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.287067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.287153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.287180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.287284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.287310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.287393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.287418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 00:24:49.492 [2024-05-16 20:23:36.287578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.287616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.492 qpair failed and we were unable to recover it. 
00:24:49.492 [2024-05-16 20:23:36.287733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.492 [2024-05-16 20:23:36.287760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.287903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.287930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.288013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.288038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.288126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.288150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.288233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.288258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.288344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.288369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.288446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.288470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.288556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.288586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.288721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.288747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.288865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.288895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 
00:24:49.493 [2024-05-16 20:23:36.289009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.289034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.289118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.289143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.289258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.289283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.289366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.289391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.289478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.289506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.289646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.289672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.289785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.289811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.289960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.289987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.290127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.290153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.290266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.290292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 
00:24:49.493 [2024-05-16 20:23:36.290381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.290406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.290489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.290517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.290608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.290634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.290724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.290751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.290867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.290893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.291011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.291036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.291148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.291175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.291266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.291291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.291368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.291393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.291480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.291506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 
00:24:49.493 [2024-05-16 20:23:36.291619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.291644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.291750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.291775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.291866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.291892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.292001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.493 [2024-05-16 20:23:36.292026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.493 qpair failed and we were unable to recover it. 00:24:49.493 [2024-05-16 20:23:36.292175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.292201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.292284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.292310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.292389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.292415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.292523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.292549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.292630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.292658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.292777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.292802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 
00:24:49.494 [2024-05-16 20:23:36.292906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.292932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.293020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.293045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.293161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.293186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.293228] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1797bb0 (9): Bad file descriptor 00:24:49.494 [2024-05-16 20:23:36.293356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.293385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.293500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.293527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.293614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.293641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.293720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.293747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.293891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.293918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.294033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.294059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 
00:24:49.494 [2024-05-16 20:23:36.294149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.294174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.294258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.294284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.294370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.294397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.294485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.294511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.294597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.294623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.294707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.294733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.294807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.294833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.294926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.294952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.295036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.295061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.295169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.295194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 
00:24:49.494 [2024-05-16 20:23:36.295283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.295308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.295392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.295421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.295510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.295536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.295621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.295647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.295742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.295780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.295915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.295943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.296058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.296084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.296200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.296226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.296351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.494 [2024-05-16 20:23:36.296377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.494 qpair failed and we were unable to recover it. 00:24:49.494 [2024-05-16 20:23:36.296460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.296485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 
00:24:49.495 [2024-05-16 20:23:36.296625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.296651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.296752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.296790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.296910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.296938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.297048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.297073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.297187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.297212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.297303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.297328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.297412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.297440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.297560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.297588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.297709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.297734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.297847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.297878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 
00:24:49.495 [2024-05-16 20:23:36.297968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.297994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.298106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.298131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.298217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.298243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.298357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.298384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.298503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.298529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.298618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.298643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.298736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.298762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.298902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.298928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.299044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.299075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.299172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.299199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 
00:24:49.495 [2024-05-16 20:23:36.299308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.299334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.299420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.299446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.299558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.299583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.299663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.495 [2024-05-16 20:23:36.299689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.495 qpair failed and we were unable to recover it. 00:24:49.495 [2024-05-16 20:23:36.299802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.299828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.299921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.299948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.300035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.300061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.300144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.300169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.300249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.300275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.300412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.300438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 
00:24:49.496 [2024-05-16 20:23:36.300545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.300570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.300686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.300713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.300881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.300908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.300992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.301018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.301128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.301154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.301260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.301286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.301402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.301427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.301510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.301536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.301615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.301641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.301731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.301759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 
00:24:49.496 [2024-05-16 20:23:36.301876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.301903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.302021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.302048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.302160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.302186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.302303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.302330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.302415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.302441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.302562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.302589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.302699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.302725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.302864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.302890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.303031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.303056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.303173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.303198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 
00:24:49.496 [2024-05-16 20:23:36.303288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.303314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.303395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.303421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.303542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.303581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.303707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.303734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.303826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.303858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.303950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.303976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.304091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.304117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.304204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.304230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.304338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.304369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.304480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.304505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 
00:24:49.496 [2024-05-16 20:23:36.304605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.304643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.304735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.496 [2024-05-16 20:23:36.304762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.496 qpair failed and we were unable to recover it. 00:24:49.496 [2024-05-16 20:23:36.304850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.304882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.304996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.305022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.305139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.305165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.305278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.305303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.305391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.305418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.305529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.305554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.305665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.305692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.305800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.305826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 
00:24:49.497 [2024-05-16 20:23:36.305946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.305972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.306058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.306084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.306199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.306224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.306329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.306353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.306461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.306486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.306567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.306592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.306719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.306757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.306859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.306887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.306974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.306999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.307107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.307132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 
00:24:49.497 [2024-05-16 20:23:36.307215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.307240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.307316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.307341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.307444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.307470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.307555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.307580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.307697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.307725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.307815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.307841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.307960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.307985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.308102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.308128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.308205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.308230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.308306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.308331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 
00:24:49.497 [2024-05-16 20:23:36.308444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.308469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.308546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.497 [2024-05-16 20:23:36.308571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.497 qpair failed and we were unable to recover it. 00:24:49.497 [2024-05-16 20:23:36.308657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.308681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.308761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.308786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.308875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.308904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.308993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.309019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.309123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.309150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.309241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.309267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.309348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.309375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.309491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.309519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 
00:24:49.498 [2024-05-16 20:23:36.309611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.309637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.309743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.309768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.309899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.309925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.310034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.310059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.310190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.310215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.310329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.310357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.310447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.310473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.310564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.310590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.310700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.310726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.310810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.310837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 
00:24:49.498 [2024-05-16 20:23:36.310962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.310987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.311069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.311094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.311186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.311213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.311325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.311351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.311471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.311497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.311583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.311609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.311725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.311751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.311870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.311897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.312014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.312039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.312177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.312202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 
00:24:49.498 [2024-05-16 20:23:36.312305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.312330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.312442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.312467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.312579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.312604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.498 qpair failed and we were unable to recover it. 00:24:49.498 [2024-05-16 20:23:36.312713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.498 [2024-05-16 20:23:36.312738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.312844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.312878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.312964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.312993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.313105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.313131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.313269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.313294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.313388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.313413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.313501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.313529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 
00:24:49.499 [2024-05-16 20:23:36.313611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.313637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.313749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.313774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.313863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.313889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.313974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.314000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.314112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.314137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.314272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.314297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.314382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.314409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.314519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.314545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.314656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.314682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.314768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.314794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 
00:24:49.499 [2024-05-16 20:23:36.314879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.314906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.314986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.315012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.315091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.315117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.315239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.315264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.315342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.315367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.315450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.315475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.315581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.315607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.315711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.315737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.315815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.315842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.315954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.315980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 
00:24:49.499 [2024-05-16 20:23:36.316091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.316116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.316196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.316221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.316338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.316371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.316482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.316507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.316599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.499 [2024-05-16 20:23:36.316626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.499 qpair failed and we were unable to recover it. 00:24:49.499 [2024-05-16 20:23:36.316719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.316745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.316864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.316892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.317007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.317033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.317115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.317140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.317226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.317251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 
00:24:49.500 [2024-05-16 20:23:36.317357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.317384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.317470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.317495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.317606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.317631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.317718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.317744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.317861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.317888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.317998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.318023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.318113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.318138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.318217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.318242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.318351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.318377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.318487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.318514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 
00:24:49.500 [2024-05-16 20:23:36.318598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.318624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.318735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.318761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.318896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.318922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.319035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.319060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.319147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.319173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.319286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.319312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.319403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.319428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.319510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.319538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.319656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.319682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.319768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.319798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 
00:24:49.500 [2024-05-16 20:23:36.319910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.319937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.320049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.320075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.320187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.320213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.320296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.320322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.320430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.320456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.320568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.320594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.320707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.320734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.320841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.500 [2024-05-16 20:23:36.320871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.500 qpair failed and we were unable to recover it. 00:24:49.500 [2024-05-16 20:23:36.320958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.320984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.321075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.321102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 
00:24:49.501 [2024-05-16 20:23:36.321240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.321266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.321365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.321390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.321501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.321527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.321638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.321676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.321786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.321813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.321911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.321937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.322018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.322045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.322132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.322157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.322242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.322267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.322353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.322378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 
00:24:49.501 [2024-05-16 20:23:36.322456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.322482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.322584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.322623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.322718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.322746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.322829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.322860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.323002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.323028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.323140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.323165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.323256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.323282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.323394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.323421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.323535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.323562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.323680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.323705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 
00:24:49.501 [2024-05-16 20:23:36.323812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.323837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.323959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.323984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.324067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.324092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.324209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.324233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.501 qpair failed and we were unable to recover it. 00:24:49.501 [2024-05-16 20:23:36.324345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.501 [2024-05-16 20:23:36.324370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.324452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.324477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.324591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.324616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.324728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.324753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.324834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.324874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.324990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.325016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 
00:24:49.502 [2024-05-16 20:23:36.325106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.325131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.325214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.325239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.325350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.325375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.325508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.325532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.325652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.325678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.325759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.325784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.325894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.325922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.326038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.326063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.326148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.326174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.326266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.326292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 
00:24:49.502 [2024-05-16 20:23:36.326402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.326428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.326514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.326540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.326630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.326655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.326799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.326824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.326931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.326970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.327078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.327109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.327224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.327259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.327390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.327424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.327577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.327604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.327742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.327767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 
00:24:49.502 [2024-05-16 20:23:36.327843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.327876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.327977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.328002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.328092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.328118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.328223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.328249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.328335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.328361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.328471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.328497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.328610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.328639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.328723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.328748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.328836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.328866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 00:24:49.502 [2024-05-16 20:23:36.328961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.502 [2024-05-16 20:23:36.328986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.502 qpair failed and we were unable to recover it. 
00:24:49.503 [2024-05-16 20:23:36.329066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.329092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.329181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.329205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.329317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.329342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.329444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.329469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.329549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.329574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.329653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.329678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.329783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.329809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.329898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.329924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.330060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.330085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.330164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.330189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 
00:24:49.503 [2024-05-16 20:23:36.330275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.330301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.330388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.330414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.330512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.330551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.330650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.330677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.330764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.330791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.330878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.330905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.330993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.331019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.331094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.331120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.331235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.331262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.331353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.331379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 
00:24:49.503 [2024-05-16 20:23:36.331492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.331517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.331624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.331650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.331728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.331753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.331864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.331904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.332027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.332054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.332137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.332164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.332250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.332276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.332357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.332382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.332469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.332495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.332576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.332603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 
00:24:49.503 [2024-05-16 20:23:36.332685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.332711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.332823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.332848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.332947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.332972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.333059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.333085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.503 qpair failed and we were unable to recover it. 00:24:49.503 [2024-05-16 20:23:36.333175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.503 [2024-05-16 20:23:36.333201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.333316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.333343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.333459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.333490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.333577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.333604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.333691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.333716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.333820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.333845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 
00:24:49.504 [2024-05-16 20:23:36.333938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.333964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.334055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.334081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.334193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.334218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.334328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.334355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.334438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.334466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.334555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.334580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.334691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.334716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.334799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.334824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.334911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.334937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.335048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.335073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 
00:24:49.504 [2024-05-16 20:23:36.335166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.335193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.335278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.335303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.335415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.335441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.335575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.335600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.335683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.335709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.335785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.335810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.335928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.335957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.336046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.336073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.336157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.336183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.336259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.336284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 
00:24:49.504 [2024-05-16 20:23:36.336392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.336418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.336529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.336555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.336635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.336660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.336800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.336839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.336952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.336988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.337089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.337116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.337196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.337221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.337300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.337326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.337417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.337442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.337555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.337580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 
00:24:49.504 [2024-05-16 20:23:36.337659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.337685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.337772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.337798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.337888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.337915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.504 [2024-05-16 20:23:36.338006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.504 [2024-05-16 20:23:36.338033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.504 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.338148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.338174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.338284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.338310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.338392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.338421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.338513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.338539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.338633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.338663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.338748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.338774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 
00:24:49.505 [2024-05-16 20:23:36.338882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.338908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.338997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.339022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.339109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.339135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.339249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.339274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.339384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.339409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.339495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.339521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.339615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.339653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.339746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.339774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.339859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.339886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.339993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.340019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 
00:24:49.505 [2024-05-16 20:23:36.340135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.340160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.340270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.340295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.340410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.340435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.340521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.340548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.340629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.340655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.340739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.340766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.340861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.340888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.341004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.341029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.341140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.341166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.341254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.341280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 
00:24:49.505 [2024-05-16 20:23:36.341392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.341418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.341502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.341529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.341680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.341718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.341807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.341839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.341978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.342005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.342119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.342144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.342265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.342291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.342378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.342404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.505 [2024-05-16 20:23:36.342500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.505 [2024-05-16 20:23:36.342525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.505 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.342639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.342664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 
00:24:49.506 [2024-05-16 20:23:36.342747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.342772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.342861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.342888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.343000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.343025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.343139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.343165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.343251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.343278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.343390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.343415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.343531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.343557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.343644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.343669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.343782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.343807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.343913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.343952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 
00:24:49.506 [2024-05-16 20:23:36.344072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.344099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.344240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.344266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.344372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.344397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.344544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.344569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.344651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.344676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.344781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.344805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.344922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.344948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.345038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.345063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.345148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.345173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.345264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.345289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 
00:24:49.506 [2024-05-16 20:23:36.345400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.345426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.345533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.345558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.345676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.345702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.345788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.345812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.345901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.345928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.346012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.346037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.346116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.346142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.346256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.346281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.346374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.346413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.346512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.346540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 
00:24:49.506 [2024-05-16 20:23:36.346661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.346687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.346774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.346799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.346892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.346918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.347029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.506 [2024-05-16 20:23:36.347055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.506 qpair failed and we were unable to recover it. 00:24:49.506 [2024-05-16 20:23:36.347169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.347194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.347315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.347341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.347430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.347457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.347551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.347579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.347666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.347692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.347773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.347798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 
00:24:49.507 [2024-05-16 20:23:36.347891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.347918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.348011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.348036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.348154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.348181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.348270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.348295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.348406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.348432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.348520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.348546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.348657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.348683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.348771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.348797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.348911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.348938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.349046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.349071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 
00:24:49.507 [2024-05-16 20:23:36.349147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.349172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.349252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.349278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.349391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.349416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.349523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.349548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.349634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.349661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.349750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.349775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.349862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.349888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.349974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.349999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.350105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.350131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.350237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.350263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 
00:24:49.507 [2024-05-16 20:23:36.350355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.350386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.350474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.350499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.350582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.350608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.350718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.350744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.350844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.350889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.351015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.351043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.351127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.351153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.351235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.351260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.351363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.351388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.351505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.351530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 
00:24:49.507 [2024-05-16 20:23:36.351647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.351673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.351785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.351811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.351929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.351957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.507 [2024-05-16 20:23:36.352044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.507 [2024-05-16 20:23:36.352069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.507 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.352184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.352209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.352348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.352373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.352458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.352483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.352593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.352618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.352729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.352755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.352875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.352903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 
00:24:49.508 [2024-05-16 20:23:36.352992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.353018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.353105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.353131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.353211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.353237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.353340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.353366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.353455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.353482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.353569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.353594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.353706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.353731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.353833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.353870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.353951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.353976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.354053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.354078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 
00:24:49.508 [2024-05-16 20:23:36.354189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.354216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.354324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.354350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.354438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.354463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.354547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.354573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.354662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.354690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.354804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.354830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.354956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.354982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.355088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.355113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.355224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.355249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.355363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.355388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 
00:24:49.508 [2024-05-16 20:23:36.355471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.355496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.355587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.355614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.355734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.355761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.355849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.355879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.355988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.356014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.356092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.356118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.356227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.356252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.356330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.356355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.356434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.356461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.356572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.356598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 
00:24:49.508 [2024-05-16 20:23:36.356683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.356708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.356822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.356847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.356941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.356969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.508 [2024-05-16 20:23:36.357081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.508 [2024-05-16 20:23:36.357106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.508 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.357246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.357274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.357357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.357382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.357467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.357493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.357577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.357602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.357710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.357736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.357825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.357850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 
00:24:49.509 [2024-05-16 20:23:36.357966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.357992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.358078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.358103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.358216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.358241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.358355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.358382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.358465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.358491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.358575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.358600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.358687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.358714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.358833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.358864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.358951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.358977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.359090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.359116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 
00:24:49.509 [2024-05-16 20:23:36.359245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.359270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.359349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.359375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.359482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.359508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.359598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.359623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.359710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.359736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.359873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.359900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.359982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.360007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.360116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.360142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.360259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.360285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.360370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.360396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 
00:24:49.509 [2024-05-16 20:23:36.360503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.360529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.360614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.360639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.360744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.360770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.360868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.360894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.360981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.361006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.361118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.361144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.361228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.361254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.361332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.361359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.361475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.361504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.361579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.361605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 
00:24:49.509 [2024-05-16 20:23:36.361689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.361714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.361826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.361858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.509 qpair failed and we were unable to recover it. 00:24:49.509 [2024-05-16 20:23:36.361942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.509 [2024-05-16 20:23:36.361967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.362055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.362080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.362191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.362222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.362333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.362358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.362443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.362471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.362582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.362607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.362703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.362728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.362837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.362868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 
00:24:49.510 [2024-05-16 20:23:36.362979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.363004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.363114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.363139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.363217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.363242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.363356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.363381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.363461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.363486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.363578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.363606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.363693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.363720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.363809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.363836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.363936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.363963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.364076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.364102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 
00:24:49.510 [2024-05-16 20:23:36.364186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.364212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.364305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.364331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.364421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.364447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.364523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.364549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.364630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.364657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.364788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.364826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.364960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.364988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.365110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.365136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.365224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.365249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.365358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.365383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 
00:24:49.510 [2024-05-16 20:23:36.365486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.365511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.365626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.365652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.365734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.365760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.365875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.365902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.365990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.366015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.366105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.366130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.366218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.510 [2024-05-16 20:23:36.366243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.510 qpair failed and we were unable to recover it. 00:24:49.510 [2024-05-16 20:23:36.366328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.366353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.366443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.366468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.366555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.366581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 
00:24:49.511 [2024-05-16 20:23:36.366668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.366695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.366780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.366806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.366901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.366930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.367042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.367070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.367161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.367194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.367281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.367308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.367417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.367444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.367525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.367549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.367662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.367689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.367773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.367797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 
00:24:49.511 [2024-05-16 20:23:36.367897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.367924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.368029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.368055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.368162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.368186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.368278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.368303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.368384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.368410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.368520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.368545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.368654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.368679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.368762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.368787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.368908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.368934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.369023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.369048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 
00:24:49.511 [2024-05-16 20:23:36.369129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.369154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.369242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.369267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.369374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.369399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.369482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.369508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.369593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.369618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.369727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.369752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.369832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.369864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.369958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.369983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.370070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.370095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.370208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.370232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 
00:24:49.511 [2024-05-16 20:23:36.370318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.370343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.370424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.370453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.370536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.370561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.370677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.370706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.370817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.370844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.370943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.370970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.371092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.371118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.371202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.371227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.371318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.511 [2024-05-16 20:23:36.371343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.511 qpair failed and we were unable to recover it. 00:24:49.511 [2024-05-16 20:23:36.371427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.371452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 
00:24:49.512 [2024-05-16 20:23:36.371538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.371564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.371649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.371676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.371759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.371786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.371870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.371896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.371986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.372011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.372103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.372129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.372239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.372264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.372349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.372377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.372464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.372493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.372607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.372633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 
00:24:49.512 [2024-05-16 20:23:36.372719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.372747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.372839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.372873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.372989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.373014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.373123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.373148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.373234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.373259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.373375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.373400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.373480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.373506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.373602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.373627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.373709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.373738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.373823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.373848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 
00:24:49.512 [2024-05-16 20:23:36.373950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.373976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.374053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.374078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.374158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.374183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.374296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.374324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.374408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.374434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.374520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.374546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.374655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.374682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.374771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.374797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.374893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.374920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.375041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.375067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 
00:24:49.512 [2024-05-16 20:23:36.375156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.375180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.375273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.375299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.375414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.375440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.375554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.375579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.375661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.375687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.375776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.375801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.375885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.375911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.376025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.376051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.376141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.376168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.376254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.376280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 
00:24:49.512 [2024-05-16 20:23:36.376367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.376392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.376483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.376510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.376590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.376616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.512 qpair failed and we were unable to recover it. 00:24:49.512 [2024-05-16 20:23:36.376725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.512 [2024-05-16 20:23:36.376751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.376844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.376882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.376997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.377042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.377142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.377169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.377294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.377321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.377405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.377434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.377526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.377552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 
00:24:49.513 [2024-05-16 20:23:36.377690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.377716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.377801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.377830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.377936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.377962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.378101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.378128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.378211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.378239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.378357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.378383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.378469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.378496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.378581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.378607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.378696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.378726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.378839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.378871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 
00:24:49.513 [2024-05-16 20:23:36.378959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.378984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.379073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.379098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.379206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.379230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.379319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.379347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.379438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.379465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.379557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.379584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.379698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.379724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.379810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.379835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.379933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.379959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.380051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.380077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 
00:24:49.513 [2024-05-16 20:23:36.380164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.380191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.380277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.380302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.380418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.380443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.380523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.380549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.380632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.380660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.380750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.380776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.380868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.380895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.380982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.381008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.381096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.381121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.381203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.381228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 
00:24:49.513 [2024-05-16 20:23:36.381318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.381344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.381432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.381460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.381572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.381598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.381678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.381703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.381811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.381836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.381941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.381971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.513 [2024-05-16 20:23:36.382061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.513 [2024-05-16 20:23:36.382086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.513 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.382191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.382216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.382308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.382333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.382442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.382471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 
00:24:49.514 [2024-05-16 20:23:36.382558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.382585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.382702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.382727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.382843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.382875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.382957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.382982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.383071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.383096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.383181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.383207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.383320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.383348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.383436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.383462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.383559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.383586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.383682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.383709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 
00:24:49.514 [2024-05-16 20:23:36.383822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.383847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.383942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.383968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.384059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.384085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.384165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.384191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.384277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.384303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.384388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.384416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.384504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.384533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.384635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.384662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.384750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.384777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.384863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.384889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 
00:24:49.514 [2024-05-16 20:23:36.384974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.385000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.385083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.385109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.385203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.385228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.385312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.385337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.385452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.385478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.385587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.385612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.385696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.385722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.385835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.385866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.385959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.385984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.386064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.386089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 
00:24:49.514 [2024-05-16 20:23:36.386200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.386225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.386304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.514 [2024-05-16 20:23:36.386329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.514 qpair failed and we were unable to recover it. 00:24:49.514 [2024-05-16 20:23:36.386416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.386441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.386521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.386546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.386639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.386664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.386744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.386772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.386870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.386898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.386996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.387022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.387098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.387124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.387217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.387243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 
00:24:49.515 [2024-05-16 20:23:36.387348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.387373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.387460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.387485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.387589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.387614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.387701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.387727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.387864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.387890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.387972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.387998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.388078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.388103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.388220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.388245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.388335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.388361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.388482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.388507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 
00:24:49.515 [2024-05-16 20:23:36.388586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.388611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.388696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.388721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.388832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.388863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.388943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.388969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.389082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.389107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.389220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.389245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.389331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.389356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.389443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.389471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.389551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.389577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.389653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.389679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 
00:24:49.515 [2024-05-16 20:23:36.389762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.389790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.389886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.389912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.389994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.390020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.390107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.390133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.390215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.390241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.390331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.390356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.390444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.390469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.390585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.390610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.390695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.390723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.390807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.390832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 
00:24:49.515 [2024-05-16 20:23:36.390946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.390989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.391092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.391118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.391205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.391230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.391369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.391394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.391495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.391520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.391597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.391622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.391714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.515 [2024-05-16 20:23:36.391741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.515 qpair failed and we were unable to recover it. 00:24:49.515 [2024-05-16 20:23:36.391831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.391863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.391981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.392007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.392120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.392145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 
00:24:49.516 [2024-05-16 20:23:36.392229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.392255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.392341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.392366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.392448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.392474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.392589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.392615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.392699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.392724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.392839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.392874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.392964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.392989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.393123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.393148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.393236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.393264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.393376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.393402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 
00:24:49.516 [2024-05-16 20:23:36.393495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.393524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.393610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.393636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.393742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.393771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.393851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.393886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.393968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.393995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.394084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.394110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.394194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.394220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.394307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.394334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.394417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.394443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.394547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.394585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 
00:24:49.516 [2024-05-16 20:23:36.394700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.394726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.394813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.394838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.394939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.394965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.395059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.395087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.395197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.395222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.395301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.395329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.395413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.395438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.395551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.395576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.395666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.395691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.395803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.395827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 
00:24:49.516 [2024-05-16 20:23:36.395921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.395947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.396030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.396056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.396128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.396153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.396242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.396266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.396380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.396405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.396484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.396512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.396602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.396633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.396768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.396798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.396885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.396912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.396999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.397024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 
00:24:49.516 [2024-05-16 20:23:36.397105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.397131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.397210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.397236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.397315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.397340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.516 [2024-05-16 20:23:36.397430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.516 [2024-05-16 20:23:36.397459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.516 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.397543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.397568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.397655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.397680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.397795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.397820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.397935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.397960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.398045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.398072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.398169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.398195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 
00:24:49.517 [2024-05-16 20:23:36.398322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.398347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.398426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.398452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.398563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.398589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.398670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.398695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.398778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.398803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.398892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.398917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.399006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.399031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.399139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.399164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.399255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.399281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.399394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.399419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 
00:24:49.517 [2024-05-16 20:23:36.399510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.399535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.399614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.399639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.399764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.399804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.399904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.399937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.400027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.400053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.400165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.400190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.400280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.400305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.400396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.400421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.400511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.400538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.400621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.400647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 
00:24:49.517 [2024-05-16 20:23:36.400737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.400765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.400877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.400904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.400988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.401013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.401089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.401114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.401200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.401225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.401338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.401363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.401452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.401479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.401572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.401597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.401674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.401699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.401810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.401835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 
00:24:49.517 [2024-05-16 20:23:36.401930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.401955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.402038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.402063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.402154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.402180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.402262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.402287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.402371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.402396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.402513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.402541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.402659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.402688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.402794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.402820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.402942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.402969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.403054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.403079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 
00:24:49.517 [2024-05-16 20:23:36.403192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.517 [2024-05-16 20:23:36.403223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.517 qpair failed and we were unable to recover it. 00:24:49.517 [2024-05-16 20:23:36.403308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.518 [2024-05-16 20:23:36.403334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.518 qpair failed and we were unable to recover it. 00:24:49.518 [2024-05-16 20:23:36.403421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.518 [2024-05-16 20:23:36.403448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.518 qpair failed and we were unable to recover it. 00:24:49.518 [2024-05-16 20:23:36.403524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.518 [2024-05-16 20:23:36.403549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.518 qpair failed and we were unable to recover it. 00:24:49.518 [2024-05-16 20:23:36.403633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.518 [2024-05-16 20:23:36.403660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.518 qpair failed and we were unable to recover it. 00:24:49.518 [2024-05-16 20:23:36.403740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.518 [2024-05-16 20:23:36.403765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.518 qpair failed and we were unable to recover it. 00:24:49.518 [2024-05-16 20:23:36.403856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.518 [2024-05-16 20:23:36.403883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.518 qpair failed and we were unable to recover it. 00:24:49.518 [2024-05-16 20:23:36.403974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.518 [2024-05-16 20:23:36.403999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.518 qpair failed and we were unable to recover it. 00:24:49.518 [2024-05-16 20:23:36.404084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.518 [2024-05-16 20:23:36.404111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.518 qpair failed and we were unable to recover it. 00:24:49.518 [2024-05-16 20:23:36.404225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.518 [2024-05-16 20:23:36.404251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.518 qpair failed and we were unable to recover it. 
00:24:49.518 [2024-05-16 20:23:36.404344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.518 [2024-05-16 20:23:36.404369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:49.518 qpair failed and we were unable to recover it.
00:24:49.518-00:24:49.523 [2024-05-16 20:23:36.404480 - 20:23:36.430009] (the same three-line sequence - posix.c:1037:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error, then "qpair failed and we were unable to recover it." - repeats continuously for this interval, alternating among tqpair values 0x7ff270000b90, 0x7ff278000b90, and 0x1789f90, all targeting addr=10.0.0.2, port=4420; every attempt fails and no qpair recovers)
00:24:49.523 [2024-05-16 20:23:36.430096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.430123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.430210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.430237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.430328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.430355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.430438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.430463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.430548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.430580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.430714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.430742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.430860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.430886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.430983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.431009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.431093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.431122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.431215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.431241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 
00:24:49.523 [2024-05-16 20:23:36.431328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.431355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.431462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.431487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.431562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.431588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.431698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.431724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.431841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.431873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.431965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.431991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.432071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.432097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.432187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.432214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.432304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.432331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.432417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.432444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 
00:24:49.523 [2024-05-16 20:23:36.432519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.432545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.432625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.432652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.432761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.432799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.432906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.432934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.433049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.433075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.433195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.433221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.433323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.433347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.433432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.433457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.433536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.433562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.433672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.433697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 
00:24:49.523 [2024-05-16 20:23:36.433811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.433836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.433979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.434008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.434119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.434148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.434229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.434255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.434335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.434361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.434448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.434478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.434572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.434598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.434694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.523 [2024-05-16 20:23:36.434723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.523 qpair failed and we were unable to recover it. 00:24:49.523 [2024-05-16 20:23:36.434802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.434828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.434935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.434962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 
00:24:49.524 [2024-05-16 20:23:36.435038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.435064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.435176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.435203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.435320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.435347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.435424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.435451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.435537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.435564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.435678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.435704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.435821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.435850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.435992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.436018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.436167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.436196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.436279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.436305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 
00:24:49.524 [2024-05-16 20:23:36.436407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.436433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.436523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.436549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.436638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.436665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.436747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.436773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.436887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.436915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.437004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.437030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.437116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.437142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.437244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.437270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.437392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.437418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.437524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.437551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 
00:24:49.524 [2024-05-16 20:23:36.437676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.437702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.437795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.437820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.437909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.437937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.438021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.438049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.438141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.438167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.438302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.438328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.438414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.438440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.438523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.438549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.438643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.438669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.438803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.438829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 
00:24:49.524 [2024-05-16 20:23:36.438924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.438950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.439039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.439065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.439175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.439205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.439320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.439346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.439421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.439447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.439530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.524 [2024-05-16 20:23:36.439563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.524 qpair failed and we were unable to recover it. 00:24:49.524 [2024-05-16 20:23:36.439662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.439706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.439801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.439827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.439940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.439968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.440082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.440108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 
00:24:49.525 [2024-05-16 20:23:36.440191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.440216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.440331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.440358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.440477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.440505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.440596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.440623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.440733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.440760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.440848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.440884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.441008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.441034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.441128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.441154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.441238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.441263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.441370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.441396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 
00:24:49.525 [2024-05-16 20:23:36.441512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.441539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.441620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.441645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.441754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.441781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.441872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.441900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.442021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.442051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.442139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.442167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.442258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.442285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.442377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.442403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.442545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.442572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.442650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.442676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 
00:24:49.525 [2024-05-16 20:23:36.442787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.442813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.442931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.442959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.443058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.443084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.443194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.443223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.443316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.443341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.443453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.443478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.443568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.443607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.443749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.443775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.443872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.443899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.443984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.444009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 
00:24:49.525 [2024-05-16 20:23:36.444086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.444111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.444223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.444249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.444341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.444366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.444475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.444500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.444613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.525 [2024-05-16 20:23:36.444638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.525 qpair failed and we were unable to recover it. 00:24:49.525 [2024-05-16 20:23:36.444734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.444765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.444859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.444887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.444965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.444991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.445067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.445093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.445184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.445210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 
00:24:49.526 [2024-05-16 20:23:36.445323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.445348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.445432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.445460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.445554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.445580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.445668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.445694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.445802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.445828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.445955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.445981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.446071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.446101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.446214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.446241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.446352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.446378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.446465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.446494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 
00:24:49.526 [2024-05-16 20:23:36.446582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.446608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.446746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.446771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.446885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.446911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.447029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.447054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.447202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.447228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.447323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.447349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.447435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.447460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.447548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.447574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.447683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.447709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.447826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.447859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 
00:24:49.526 [2024-05-16 20:23:36.447948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.447974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.448059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.448085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.448200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.448226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.448307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.448333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.448470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.448495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.448610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.448636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.448721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.526 [2024-05-16 20:23:36.448748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.526 qpair failed and we were unable to recover it. 00:24:49.526 [2024-05-16 20:23:36.448858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.448885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.448991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.449017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.449130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.449155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 
00:24:49.527 [2024-05-16 20:23:36.449234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.449260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.449341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.449366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.449440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.449465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.449580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.449605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.449722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.449747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.449820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.449851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.449954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.449980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.450087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.450112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.450225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.450250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.450331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.450356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 
00:24:49.527 [2024-05-16 20:23:36.450435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.450460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.450534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.450559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.450644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.450670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.450782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.450807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.450926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.450951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.451088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.451114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.451207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.451237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.451363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.451389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.451492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.451518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.451632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.451658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 
00:24:49.527 [2024-05-16 20:23:36.451747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.451772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.451925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.451964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.452081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.452108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.452232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.452258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.452345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.452370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.452457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.452483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.452592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.452618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.452729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.452754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.452833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.452865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.452955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.452981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 
00:24:49.527 [2024-05-16 20:23:36.453065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.453090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.453253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.453294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.453440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.453468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.453580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.453607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.453725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.453751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.453860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.527 [2024-05-16 20:23:36.453888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.527 qpair failed and we were unable to recover it. 00:24:49.527 [2024-05-16 20:23:36.453969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.453995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.454113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.454139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.454215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.454241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.454373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.454399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 
00:24:49.528 [2024-05-16 20:23:36.454514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.454539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.454654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.454679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.454771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.454798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.454914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.454939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.455027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.455052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.455128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.455153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.455274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.455301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.455414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.455440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.455554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.455582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.455702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.455728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 
00:24:49.528 [2024-05-16 20:23:36.455812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.455838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.455974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.456001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.456115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.456142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.456256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.456281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.456432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.456457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.456568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.456594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.456675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.456701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.456780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.456805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.456914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.456941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.457053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.457079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 
00:24:49.528 [2024-05-16 20:23:36.457193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.457219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.457350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.457376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.457465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.457491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.457581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.457610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.457754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.457793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.457892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.457920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.458038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.458063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.458173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.458198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.458313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.458338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 00:24:49.528 [2024-05-16 20:23:36.458480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.528 [2024-05-16 20:23:36.458506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.528 qpair failed and we were unable to recover it. 
00:24:49.528 [2024-05-16 20:23:36.458626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.458654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.458750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.458776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.458867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.458899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.459007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.459032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.459109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.459135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.459211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.459236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.459320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.459347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.459420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.459445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.459559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.459594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.459668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.459693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 
00:24:49.529 [2024-05-16 20:23:36.459816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.459841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.459928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.459955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.460039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.460063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.460179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.460204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.460316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.460341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.460493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.460520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.460640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.460667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.460778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.460806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.460942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.460969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.461054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.461079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 
00:24:49.529 [2024-05-16 20:23:36.461187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.461215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.461340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.461368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.461453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.461479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.461591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.461616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.461756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.461780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.461890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.461915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.462000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.462026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.462136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.462179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.462275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.462318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.462463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.462491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 
00:24:49.529 [2024-05-16 20:23:36.462621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.462651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.462773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.462802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.462939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.462966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.463070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.463096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.463185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.463212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.463345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.463370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.463485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.463510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.463596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.529 [2024-05-16 20:23:36.463620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.529 qpair failed and we were unable to recover it. 00:24:49.529 [2024-05-16 20:23:36.463710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.463736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.463887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.463912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 
00:24:49.530 [2024-05-16 20:23:36.464024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.464048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.464152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.464179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.464327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.464354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.464508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.464536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.464633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.464661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.464768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.464793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.464893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.464919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.465001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.465026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.465115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.465140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.465249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.465275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 
00:24:49.530 [2024-05-16 20:23:36.465383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.465409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.465511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.465539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.465623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.465650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.465747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.465775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.465884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.465909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.466019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.466044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.466175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.466206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.466344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.466392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.466484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.466513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.466658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.466686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 
00:24:49.530 [2024-05-16 20:23:36.466790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.466814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.466931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.466956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.467045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.467070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.467155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.467181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.467320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.467344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.467455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.467484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.467578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.467621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.467710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.467736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.467848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.467880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.467985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.468014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 
00:24:49.530 [2024-05-16 20:23:36.468104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.468129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.468239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.468264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.468378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.468403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.468487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.468512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.468621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.468646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.530 [2024-05-16 20:23:36.468757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.530 [2024-05-16 20:23:36.468782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.530 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.468895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.468921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.469025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.469050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.469145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.469173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.469256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.469284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 
00:24:49.531 [2024-05-16 20:23:36.469398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.469426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.469546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.469573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.469689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.469717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.469860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.469888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.470030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.470055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.470135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.470177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.470267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.470295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.470416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.470443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.470534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.470562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.470708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.470735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 
00:24:49.531 [2024-05-16 20:23:36.470840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.470872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.471015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.471041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.471122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.471164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.471283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.471310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.471400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.471427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.471519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.471547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.471663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.471694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.471784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.471824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.471961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.471988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.472116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.472144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 
00:24:49.531 [2024-05-16 20:23:36.472240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.472267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.472392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.472419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.472532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.472591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.472707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.472734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.472846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.472880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.472968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.472995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.473085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.473111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.473219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.531 [2024-05-16 20:23:36.473246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.531 qpair failed and we were unable to recover it. 00:24:49.531 [2024-05-16 20:23:36.473359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.473385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.473463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.473490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 
00:24:49.532 [2024-05-16 20:23:36.473602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.473628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.473760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.473786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.473873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.473899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.473978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.474004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.474086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.474112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.474225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.474250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.474366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.474391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.474529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.474554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.474640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.474667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.474779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.474805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 
00:24:49.532 [2024-05-16 20:23:36.474922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.474949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.475043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.475069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.475178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.475204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.475298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.475324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.475438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.475463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.475546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.475573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.475711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.475737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.475819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.475846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.475934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.475960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.476092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.476119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 
00:24:49.532 [2024-05-16 20:23:36.476252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.476278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.476361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.476388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.476530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.476556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.476678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.476704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.476783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.476808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.476936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.476979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.477083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.477113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.477220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.477249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.477396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.477424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.477543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.477571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 
00:24:49.532 [2024-05-16 20:23:36.477658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.477701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.477785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.477809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.477916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.477945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.478039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.478067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.478150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.532 [2024-05-16 20:23:36.478177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.532 qpair failed and we were unable to recover it. 00:24:49.532 [2024-05-16 20:23:36.478266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.478293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.478386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.478413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.478538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.478567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.478698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.478723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.478829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.478861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 
00:24:49.533 [2024-05-16 20:23:36.478990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.479018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.479161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.479189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.479305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.479351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.479471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.479515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.479653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.479679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.479776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.479815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.479935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.479979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.480071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.480099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.480196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.480225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.480332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.480362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 
00:24:49.533 [2024-05-16 20:23:36.480463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.480491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.480612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.480640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.480729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.480757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.480873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.480920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.481030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.481058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.481156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.481184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.481284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.481311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.481402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.481430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.481577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.481605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.481705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.481745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 
00:24:49.533 [2024-05-16 20:23:36.481831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.481864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.481985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.482014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.482102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.482127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.482231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.482257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.482380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.482407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.482504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.482531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.482682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.482709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.482848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.482884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.482999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.483024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.483110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.483151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 
00:24:49.533 [2024-05-16 20:23:36.483244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.533 [2024-05-16 20:23:36.483271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.533 qpair failed and we were unable to recover it. 00:24:49.533 [2024-05-16 20:23:36.483368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.483396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.483521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.483549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.483664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.483693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.483796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.483822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.483926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.483954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.484064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.484090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.484198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.484223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.484310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.484336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.484410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.484453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 
00:24:49.534 [2024-05-16 20:23:36.484579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.484628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.484751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.484779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.484926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.484952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.485034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.485059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.485200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.485228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.485347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.485389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.485544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.485572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.485658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.485686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.485809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.485835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.485950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.485976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 
00:24:49.534 [2024-05-16 20:23:36.486085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.486111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.486269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.486297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.486419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.486447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.486536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.486564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.486663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.486704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.486807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.486833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.486943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.486971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.487058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.487083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.487169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.487194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.487310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.487335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 
00:24:49.534 [2024-05-16 20:23:36.487465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.487494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.487613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.487655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.487765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.487801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.487904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.487930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.488044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.488070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.488189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.488217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.488307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.488335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.488459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.488491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.488613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.488641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.488739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.488764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 
00:24:49.534 [2024-05-16 20:23:36.488876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.534 [2024-05-16 20:23:36.488903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.534 qpair failed and we were unable to recover it. 00:24:49.534 [2024-05-16 20:23:36.488985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.489010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.489122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.489147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.489271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.489296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.489400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.489425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.489514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.489556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.489672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.489700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.489845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.489878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.490017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.490042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.490154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.490178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 
00:24:49.535 [2024-05-16 20:23:36.490282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.490323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.490488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.490531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.490656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.490684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.490832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.490866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.490972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.490997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.491079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.491104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.491216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.491241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.491314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.491339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.491448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.491473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.491549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.491575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 
00:24:49.535 [2024-05-16 20:23:36.491766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.491806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.491919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.491948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.492023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.492050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.492174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.492218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.492348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.492379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.492463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.492489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.492600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.492625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.492704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.492730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.492833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.492867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.492961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.492988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 
00:24:49.535 [2024-05-16 20:23:36.493066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.493092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.493191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.493217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.493309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.493335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.493424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.535 [2024-05-16 20:23:36.493449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.535 qpair failed and we were unable to recover it. 00:24:49.535 [2024-05-16 20:23:36.493527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.493553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.493667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.493693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.493803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.493829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.493946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.493973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.494066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.494093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.494178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.494211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 
00:24:49.536 [2024-05-16 20:23:36.494323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.494350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.494436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.494461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.494538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.494564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.494673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.494698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.494776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.494818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.494949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.494978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.495076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.495104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.495188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.495216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.495305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.495333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.495444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.495472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 
00:24:49.536 [2024-05-16 20:23:36.495584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.495626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.495731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.495760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.495891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.495929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.496048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.496073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.496161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.496188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.496277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.496301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.496394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.496423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.496535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.496561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.496695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.496720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.496832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.496865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 
00:24:49.536 [2024-05-16 20:23:36.496968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.496997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.497129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.497154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.497271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.497297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.497378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.497404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.497493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.497520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.497608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.497636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.497749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.497774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.497863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.497889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.498009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.498034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.536 [2024-05-16 20:23:36.498120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.498146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 
00:24:49.536 [2024-05-16 20:23:36.498224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.536 [2024-05-16 20:23:36.498249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.536 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.498339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.498378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.498464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.498489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.498625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.498663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.498783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.498809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.498923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.498953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.499075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.499105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.499257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.499282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.499371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.499396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.499485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.499513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 
00:24:49.537 [2024-05-16 20:23:36.499599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.499625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.499763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.499789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.499902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.499929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.500016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.500043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.500123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.500150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.500265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.500292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.500399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.500424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.500535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.500560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.500686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.500711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.500789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.500814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 
00:24:49.537 [2024-05-16 20:23:36.500934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.500961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.501123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.501149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.501297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.501337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.501459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.501487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.501597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.501624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.501711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.501737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.501843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.501876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.502005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.502049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.502182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.502215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.502325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.502351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 
00:24:49.537 [2024-05-16 20:23:36.502487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.502513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.502597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.502624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.502713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.502738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.502868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.502894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.503033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.503059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.503181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.503220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.503421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.503466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.537 [2024-05-16 20:23:36.503574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-05-16 20:23:36.503617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.537 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.503706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.503732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.503848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.503884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 
00:24:49.538 [2024-05-16 20:23:36.503995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.504021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.504127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.504153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.504240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.504265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.504355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.504384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.504506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.504534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.504660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.504685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.504802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.504827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.504927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.504952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.505056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.505085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.505215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.505243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 
00:24:49.538 [2024-05-16 20:23:36.505391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.505419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.505514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.505541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.505627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.505671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.505747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.505771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.505851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.505883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.505968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.505993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.506095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.506119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.506231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.506257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.506333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.506359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.506438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.506462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 
00:24:49.538 [2024-05-16 20:23:36.506554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.506579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.506685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.506710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.506796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.506825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.506921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.506952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.507071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.507099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.507191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.507219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.507311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.507340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.507460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.507488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.507604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.507630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.507743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.507769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 
00:24:49.538 [2024-05-16 20:23:36.507844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.507879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.507992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.508018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.508109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.508142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.508222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.508248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.508337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.508363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.508468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.508498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.508617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.508644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.538 qpair failed and we were unable to recover it. 00:24:49.538 [2024-05-16 20:23:36.508728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.538 [2024-05-16 20:23:36.508756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.508887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.508926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.509072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.509099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 
00:24:49.539 [2024-05-16 20:23:36.509213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.509239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.509347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.509372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.509482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.509508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.509600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.509626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.509742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.509780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.509876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.509904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.509988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.510014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.510130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.510155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.510268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.510293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.510380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.510406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 
00:24:49.539 [2024-05-16 20:23:36.510520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.510546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.510685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.510710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.510796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.510824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.510956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.510983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.511064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.511090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.511203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.511229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.511343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.511368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.511475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.511500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.511613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.511638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.511723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.511751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 
00:24:49.539 [2024-05-16 20:23:36.511890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.511917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.512022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.512048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.512190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.512226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.512315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.512341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.512446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.512472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.512582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.512608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.512693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.512718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.512830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.512863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.512978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.513004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.513081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.513107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 
00:24:49.539 [2024-05-16 20:23:36.513217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.513243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.513349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.513375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.513483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.513510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.539 [2024-05-16 20:23:36.513594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.539 [2024-05-16 20:23:36.513620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.539 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.513762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.513787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.513900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.513927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.514046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.514073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.514180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.514206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.514288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.514313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.514428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.514456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 
00:24:49.540 [2024-05-16 20:23:36.514572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.514601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.514712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.514737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.514824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.514851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.515000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.515026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.515112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.515138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.515247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.515272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.515379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.515404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.515490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.515517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.515607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.515633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.515752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.515779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 
00:24:49.540 [2024-05-16 20:23:36.515929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.515956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.516068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.516093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.516174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.516199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.516316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.516341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.516448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.516473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.516552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.516578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.516691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.516723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.516813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.516838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.516954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.516979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.517065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.517090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 
00:24:49.540 [2024-05-16 20:23:36.517202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.517227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.517334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.517359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.517443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.517472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.517555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.517580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.517664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.517698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.517809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.517835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.540 [2024-05-16 20:23:36.517940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.540 [2024-05-16 20:23:36.517967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.540 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.518045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.518071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.518227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.518252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.518338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.518364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 
00:24:49.541 [2024-05-16 20:23:36.518480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.518505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.518611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.518637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.518719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.518744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.518831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.518866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.518960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.518985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.519095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.519121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.519218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.519244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.519359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.519384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.519477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.519503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.519595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.519634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 
00:24:49.541 [2024-05-16 20:23:36.519780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.519807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.519922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.519949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.520029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.520054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.520166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.520191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.520272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.520297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.520378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.520403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.520511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.520535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.520636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.520660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.520769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.520793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.520904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.520935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 
00:24:49.541 [2024-05-16 20:23:36.521024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.521050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.521162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.521187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.521280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.521305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.521401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.521426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.521535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.521561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.521668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.521693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.521779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.521804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.521889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.521916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.522032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.522058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.522143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.522168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 
00:24:49.541 [2024-05-16 20:23:36.522254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.522279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.522362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.541 [2024-05-16 20:23:36.522387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.541 qpair failed and we were unable to recover it. 00:24:49.541 [2024-05-16 20:23:36.522469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.522494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.522575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.522600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.522708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.522733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.522810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.522835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.522950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.522975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.523047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.523072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.523206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.523231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.523352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.523377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 
00:24:49.542 [2024-05-16 20:23:36.523457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.523482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.523588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.523619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.523733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.523758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.523845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.523879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.523974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.524013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.524134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.524162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.524279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.524310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.524447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.524473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.524607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.524633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.524746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.524771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 
00:24:49.542 [2024-05-16 20:23:36.524880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.524910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.524984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.525009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.525113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.525138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.525221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.525246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.525330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.525355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.525439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.525464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.525582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.525607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.525699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.525724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.525812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.525837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.525927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.525952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 
00:24:49.542 [2024-05-16 20:23:36.526046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.526071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.526181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.526206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.526313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.526338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.526470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.526495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.526582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.526606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.526690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.526718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.526808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.526847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.526984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.527013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.527091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.527117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.527250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.527276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 
00:24:49.542 [2024-05-16 20:23:36.527363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.527389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.542 [2024-05-16 20:23:36.527478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.542 [2024-05-16 20:23:36.527504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.542 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.527613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.527638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.527749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.527780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.527880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.527907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.528019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.528045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.528159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.528183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.528265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.528290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.528370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.528395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.528509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.528534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 
00:24:49.543 [2024-05-16 20:23:36.528612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.528637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.528747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.528772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.528864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.528890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.528995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.529024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.529122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.529150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.529274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.529302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.529445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.529473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.529606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.529633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.529728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.529756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.529912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.529943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 
00:24:49.543 [2024-05-16 20:23:36.530024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.530050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.530180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.530209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.530310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.530337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.530436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.530463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.530603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.530628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.530734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.530759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.530872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.530898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.531007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.531033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.531111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.531136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.531233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.531258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 
00:24:49.543 [2024-05-16 20:23:36.531399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.531428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.531511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.531536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.531618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.531644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.531761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.531786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.531893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.531918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.532000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.532025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.532137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.532164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.532282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.532307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.532419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.532445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.532524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.532549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 
00:24:49.543 [2024-05-16 20:23:36.532644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.543 [2024-05-16 20:23:36.532694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.543 qpair failed and we were unable to recover it. 00:24:49.543 [2024-05-16 20:23:36.532841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.532878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.532960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.532986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.533070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.533095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.533231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.533274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.533375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.533420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.533554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.533598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.533739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.533765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.533861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.533888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.534005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.534031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 
00:24:49.544 [2024-05-16 20:23:36.534114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.534140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.534217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.534243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.534356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.534385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.534469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.534494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.534609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.534634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.534718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.534743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.534860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.534886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.534992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.535021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.535108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.535133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.535216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.535241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 
00:24:49.544 [2024-05-16 20:23:36.535322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.535347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.535500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.535528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.535672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.535700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.535828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.535874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.536039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.536067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.536178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.536203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.536315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.536340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.536421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.536446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.536552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.536577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.536685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.536710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 
00:24:49.544 [2024-05-16 20:23:36.536842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.536877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.537013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.537040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.537154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.537179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.537261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.537288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.537398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.537432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.537543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.537569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.537655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.537681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.537792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.537818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.537937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.537963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 00:24:49.544 [2024-05-16 20:23:36.538070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.544 [2024-05-16 20:23:36.538095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.544 qpair failed and we were unable to recover it. 
00:24:49.544 [2024-05-16 20:23:36.538192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.545 [2024-05-16 20:23:36.538220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.545 qpair failed and we were unable to recover it. 00:24:49.545 [2024-05-16 20:23:36.538322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.545 [2024-05-16 20:23:36.538363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.545 qpair failed and we were unable to recover it. 00:24:49.545 [2024-05-16 20:23:36.538448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.545 [2024-05-16 20:23:36.538475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.545 qpair failed and we were unable to recover it. 00:24:49.545 [2024-05-16 20:23:36.538564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.545 [2024-05-16 20:23:36.538591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.545 qpair failed and we were unable to recover it. 00:24:49.545 [2024-05-16 20:23:36.538698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.545 [2024-05-16 20:23:36.538728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.545 qpair failed and we were unable to recover it. 00:24:49.545 [2024-05-16 20:23:36.538814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.545 [2024-05-16 20:23:36.538840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.545 qpair failed and we were unable to recover it. 00:24:49.545 [2024-05-16 20:23:36.538985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.545 [2024-05-16 20:23:36.539011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.545 qpair failed and we were unable to recover it. 00:24:49.545 [2024-05-16 20:23:36.539093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.545 [2024-05-16 20:23:36.539118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.545 qpair failed and we were unable to recover it. 00:24:49.545 [2024-05-16 20:23:36.539228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.545 [2024-05-16 20:23:36.539254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.545 qpair failed and we were unable to recover it. 00:24:49.545 [2024-05-16 20:23:36.539362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.545 [2024-05-16 20:23:36.539388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.545 qpair failed and we were unable to recover it. 
00:24:49.550 [2024-05-16 20:23:36.567894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.567926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.568031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.568061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.568172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.568202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.568319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.568363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.568457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.568483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.568622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.568648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.568734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.568760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.568872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.568900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.569005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.569032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.569145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.569171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 
00:24:49.550 [2024-05-16 20:23:36.569260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.569291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.569370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.569396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.569512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.569539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.569649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.569675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.569782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.569807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.569927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.569955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.570054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.570079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.570186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.570211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.570289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.570314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.570399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.570425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 
00:24:49.550 [2024-05-16 20:23:36.570538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.570564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.570679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.570706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.570793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.570818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.570905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.570931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.571073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.571099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.571184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.571211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.571322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.571348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.571427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.571452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.571543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.571569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.571679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.571705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 
00:24:49.550 [2024-05-16 20:23:36.571847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.571879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.572002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.572027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.572116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.572142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.572259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.572286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.572423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.572449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.572585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.572610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.572696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.572722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.572816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.572843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.572987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.573013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.573122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.573147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 
00:24:49.550 [2024-05-16 20:23:36.573256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.573281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.573396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.573422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.573505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.573530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.573619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.550 [2024-05-16 20:23:36.573644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.550 qpair failed and we were unable to recover it. 00:24:49.550 [2024-05-16 20:23:36.573761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.573787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.573869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.573895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.574045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.574070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.574178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.574204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.574296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.574322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.574425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.574451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 
00:24:49.551 [2024-05-16 20:23:36.574541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.574571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.574662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.574688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.574794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.574820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.574983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.575021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.575111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.575138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.575249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.575275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.575362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.575387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.575503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.575528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.575614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.575639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.575735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.575762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 
00:24:49.551 [2024-05-16 20:23:36.575851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.575883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.575971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.575998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.576102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.576131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.576279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.576322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.576418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.576443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.576559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.576585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.576701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.576727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.576845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.576877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.576990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.577035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.577147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.577189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 
00:24:49.551 [2024-05-16 20:23:36.577300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.577326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.577437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.577463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.577601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.577627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.577735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.577760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.577938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.577982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.578133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.578163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.578286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.578314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.578432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.578469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.578566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.578594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.578695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.578721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 
00:24:49.551 [2024-05-16 20:23:36.578834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.578865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.578962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.578987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.579078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.579103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.579235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.579263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.579364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.579392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.579484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.579513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.579614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.579653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.579796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.579835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.551 [2024-05-16 20:23:36.579988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.551 [2024-05-16 20:23:36.580015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.551 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.580179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.580220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 
00:24:49.552 [2024-05-16 20:23:36.580314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.580342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.580449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.580477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.580622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.580650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.580779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.580818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.580966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.581003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.581161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.581200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.581349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.581390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.581532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.581571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.581728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.581761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.581866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.581895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 
00:24:49.552 [2024-05-16 20:23:36.581996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.582024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.582112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.582137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.582247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.582273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.582382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.582407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.582496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.582532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.582657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.582684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.582763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.582800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.582944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.582982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.583091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.583125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.583266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.583300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 
00:24:49.552 [2024-05-16 20:23:36.583397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.583435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.583578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.583612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.583770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.583805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.583988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.584039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.584146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.584174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.584300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.584344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.584427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.584454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.584537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.584562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.584661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.584687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.584765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.584791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 
00:24:49.552 [2024-05-16 20:23:36.584900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.584927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.585049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.585075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.585157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.585182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.585262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.585288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.585404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.585429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.585568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.585594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.585685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.585710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.585794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.585819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.585921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.585948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 00:24:49.552 [2024-05-16 20:23:36.586029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.552 [2024-05-16 20:23:36.586055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.552 qpair failed and we were unable to recover it. 
00:24:49.552 [2024-05-16 20:23:36.586170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.552 [2024-05-16 20:23:36.586196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:49.552 qpair failed and we were unable to recover it.
[...the three entries above repeat back-to-back from 20:23:36.586170 through 20:23:36.616162 (log prefixes 00:24:49.552-00:24:49.844); only the timestamps and the tqpair handle vary, cycling among 0x7ff270000b90, 0x7ff268000b90, 0x7ff278000b90, and 0x1789f90, always with addr=10.0.0.2, port=4420 and errno = 111, and every attempt ends with "qpair failed and we were unable to recover it."...]
00:24:49.844 [2024-05-16 20:23:36.616247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.616276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.616386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.616412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.616511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.616537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.616675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.616700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.616780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.616805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.616928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.616953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.617063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.617088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.617178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.617202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.617329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.617357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.617451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.617478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 
00:24:49.844 [2024-05-16 20:23:36.617582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.617609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.617717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.617745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.617876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.617940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.618052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.618083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.618175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.618203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.618321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.618350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.618492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.618541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.618681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.618710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.618811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.618839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.618930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.618957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 
00:24:49.844 [2024-05-16 20:23:36.619069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.619112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.619213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.619242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.619364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.619410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.619517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.619543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.619673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.619699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.619787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.619813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.619933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.619960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.620071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.620096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.620175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.620200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.620284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.620309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 
00:24:49.844 [2024-05-16 20:23:36.620399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.620423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.620503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.620531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.620616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.620640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.620743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.620767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.620863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.620889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.621015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.621043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.621161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.621188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.621272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.621299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.621411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.844 [2024-05-16 20:23:36.621458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.844 qpair failed and we were unable to recover it. 00:24:49.844 [2024-05-16 20:23:36.621611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.621657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 
00:24:49.845 [2024-05-16 20:23:36.621768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.621793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.621875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.621902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.621986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.622014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.622104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.622130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.622246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.622271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.622388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.622415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.622503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.622528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.622646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.622672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.622750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.622775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.622865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.622891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 
00:24:49.845 [2024-05-16 20:23:36.622977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.623003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.623090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.623116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.623257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.623283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.623371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.623398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.623511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.623537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.623620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.623646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.623741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.623769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.623878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.623903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.624020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.624065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.624233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.624263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 
00:24:49.845 [2024-05-16 20:23:36.624383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.624427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.624518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.624543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.624630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.624655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.624743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.624769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.624888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.624916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.625010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.625036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.625120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.625145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.625230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.625256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.625366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.625393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.625474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.625499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 
00:24:49.845 [2024-05-16 20:23:36.625613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.625640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.625722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.625747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.625829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.625865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.625976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.626001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.626086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.626111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.626195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.626220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.626301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.626325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.626436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.626462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.626543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.626567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.626671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.626696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 
00:24:49.845 [2024-05-16 20:23:36.626782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.626806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.626888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.626913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.627014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.627041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.627155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.627182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.627297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.627324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.627445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.627490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.627590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.627616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.627726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.627765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.627862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.627889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 00:24:49.845 [2024-05-16 20:23:36.628004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.845 [2024-05-16 20:23:36.628028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.845 qpair failed and we were unable to recover it. 
00:24:49.846 [2024-05-16 20:23:36.628162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.628189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.628282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.628309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.628394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.628422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.628549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.628595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.628706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.628735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.628843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.628875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.628987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.629013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.629130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.629159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.629254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.629288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.629433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.629462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 
00:24:49.846 [2024-05-16 20:23:36.629549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.629578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.629678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.629708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.629813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.629839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.629973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.629999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.630111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.630137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.630223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.630267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.630360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.630389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.630482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.630512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.630638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.630669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.630783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.630813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 
00:24:49.846 [2024-05-16 20:23:36.630929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.630974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.631085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.631131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.631273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.631319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.631428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.631465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.631605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.631631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.631717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.631742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.631896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.631923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.632029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.632056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.632140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.632166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.632289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.632315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 
00:24:49.846 [2024-05-16 20:23:36.632397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.632424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.632510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.632539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.632627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.632653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.632770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.632798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.632885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.632911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.633035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.633061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.633144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.633171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.633260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.633287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.633368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.633411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 00:24:49.846 [2024-05-16 20:23:36.633496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.633525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it. 
00:24:49.846 [2024-05-16 20:23:36.633664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.846 [2024-05-16 20:23:36.633719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.846 qpair failed and we were unable to recover it.
00:24:49.846 [... the same three-line pattern — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=<ptr> with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously in the target timestamps 2024-05-16 20:23:36.633664 through 20:23:36.662150 (console timestamps 00:24:49.846–00:24:49.851), cycling over tqpair=0x7ff278000b90, 0x7ff270000b90 and 0x7ff268000b90, all against addr=10.0.0.2, port=4420 ...]
00:24:49.851 [2024-05-16 20:23:36.662262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.662289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.662370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.662415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.662511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.662540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.662659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.662689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.662795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.662824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.662934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.662962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.663073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.663116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.663199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.663225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.663322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.663351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.663489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.663520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 
00:24:49.851 [2024-05-16 20:23:36.663625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.663650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.663765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.663792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.663909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.663936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.664020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.664066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.664151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.664180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.664328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.664358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.664447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.664477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.664600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.664629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.664732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.664758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.664876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.664903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 
00:24:49.851 [2024-05-16 20:23:36.664991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.665018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.665126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.665153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.665274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.665303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.665391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.665419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.665512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.665540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.665629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.665670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.665811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.665836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.665934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.665962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.666093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.666136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.666244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.666276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 
00:24:49.851 [2024-05-16 20:23:36.666372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.666402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.666530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.666559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.666709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.666735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.666843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.666876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.666999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.667024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.851 [2024-05-16 20:23:36.667133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.851 [2024-05-16 20:23:36.667160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.851 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.667248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.667276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.667384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.667412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.667510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.667543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.667636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.667677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 
00:24:49.852 [2024-05-16 20:23:36.667787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.667814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.667933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.667961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.668078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.668106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.668234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.668260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.668373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.668399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.668495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.668523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.668607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.668635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.668720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.668748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.668843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.668877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.668990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.669017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 
00:24:49.852 [2024-05-16 20:23:36.669116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.669145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.669289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.669318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.669439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.669468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.669614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.669645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.669772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.669797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.669878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.669906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.669990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.670017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.670108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.670155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.670282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.670312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.670398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.670428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 
00:24:49.852 [2024-05-16 20:23:36.670555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.670589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.670751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.670777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.670888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.670917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.671003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.671028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.671131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.671168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.671270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.671297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.671406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.671432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.671546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.671576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.671672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.671718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.671827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.671860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 
00:24:49.852 [2024-05-16 20:23:36.672001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.672029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.672114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.672142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.672252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.672277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.672373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.852 [2024-05-16 20:23:36.672401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.852 qpair failed and we were unable to recover it. 00:24:49.852 [2024-05-16 20:23:36.672483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.672511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.672658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.672702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.672836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.672873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.672988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.673015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.673100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.673127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.673269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.673298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 
00:24:49.853 [2024-05-16 20:23:36.673426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.673456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.673563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.673593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.673684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.673712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.673801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.673843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.673941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.673966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.674048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.674073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.674161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.674186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.674275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.674317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.674440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.674470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.674579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.674605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 
00:24:49.853 [2024-05-16 20:23:36.674736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.674765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.674896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.674922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.675019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.675056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.675163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.675207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.675325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.675352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.675445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.675472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.675592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.675619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.675734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.675759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.675841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.675874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.675987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.676012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 
00:24:49.853 [2024-05-16 20:23:36.676095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.676137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.676224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.676252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.676347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.676374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.676466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.676494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.676579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.676607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.676734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.676767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.676930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.676969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.677063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.677092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.677188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.677214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.677319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.677363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 
00:24:49.853 [2024-05-16 20:23:36.677506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.677537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.677633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.677662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.677772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.677815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.677964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.677993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.678085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.678114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.678231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.678275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.678446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-05-16 20:23:36.678475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.853 qpair failed and we were unable to recover it. 00:24:49.853 [2024-05-16 20:23:36.678609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.854 [2024-05-16 20:23:36.678657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.854 qpair failed and we were unable to recover it. 00:24:49.854 [2024-05-16 20:23:36.678750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.854 [2024-05-16 20:23:36.678780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.854 qpair failed and we were unable to recover it. 00:24:49.854 [2024-05-16 20:23:36.678893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.854 [2024-05-16 20:23:36.678920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.854 qpair failed and we were unable to recover it. 
00:24:49.854 [2024-05-16 20:23:36.679008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.854 [2024-05-16 20:23:36.679033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.854 qpair failed and we were unable to recover it. 00:24:49.854 [2024-05-16 20:23:36.679156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.854 [2024-05-16 20:23:36.679183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.854 qpair failed and we were unable to recover it. 00:24:49.854 [2024-05-16 20:23:36.679297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.854 [2024-05-16 20:23:36.679341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.854 qpair failed and we were unable to recover it. 00:24:49.854 [2024-05-16 20:23:36.679468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.854 [2024-05-16 20:23:36.679503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.854 qpair failed and we were unable to recover it. 00:24:49.854 [2024-05-16 20:23:36.679631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.854 [2024-05-16 20:23:36.679658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.854 qpair failed and we were unable to recover it. 00:24:49.854 [2024-05-16 20:23:36.679760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.854 [2024-05-16 20:23:36.679785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.854 qpair failed and we were unable to recover it. 00:24:49.854 [2024-05-16 20:23:36.679903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.854 [2024-05-16 20:23:36.679932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.854 qpair failed and we were unable to recover it. 00:24:49.854 [2024-05-16 20:23:36.680018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.854 [2024-05-16 20:23:36.680043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.854 qpair failed and we were unable to recover it. 00:24:49.854 [2024-05-16 20:23:36.680148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.854 [2024-05-16 20:23:36.680176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.854 qpair failed and we were unable to recover it. 00:24:49.854 [2024-05-16 20:23:36.680269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.854 [2024-05-16 20:23:36.680297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.854 qpair failed and we were unable to recover it. 
00:24:49.854 [2024-05-16 20:23:36.680394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.854 [2024-05-16 20:23:36.680421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420
00:24:49.854 qpair failed and we were unable to recover it.
00:24:49.854 [2024-05-16 20:23:36.680633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.854 [2024-05-16 20:23:36.680666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420
00:24:49.854 qpair failed and we were unable to recover it.
00:24:49.854 [2024-05-16 20:23:36.680764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.854 [2024-05-16 20:23:36.680796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:24:49.854 qpair failed and we were unable to recover it.
00:24:49.854 [2024-05-16 20:23:36.680920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.854 [2024-05-16 20:23:36.680960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:49.854 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1037:posix_sock_create connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error / qpair failed and we were unable to recover it) repeats continuously from 2024-05-16 20:23:36.680 through 20:23:36.708 (elapsed 00:24:49.854 - 00:24:49.859) for tqpair values 0x7ff278000b90, 0x7ff270000b90, 0x7ff268000b90 and 0x1789f90, all targeting addr=10.0.0.2, port=4420; none of the affected qpairs could be recovered ...]
00:24:49.859 [2024-05-16 20:23:36.708849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.708882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.708982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.709010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.709129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.709157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.709314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.709341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.709453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.709478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.709567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.709593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.709674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.709699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.709786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.709810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.709906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.709934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.710023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.710048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 
00:24:49.859 [2024-05-16 20:23:36.710158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.710183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.710260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.710286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.710369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.710394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.710471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.710496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.710573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.710600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.710719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.710744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.710838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.710873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.711018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.711047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.711147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.711177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.711280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.711325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 
00:24:49.859 [2024-05-16 20:23:36.711449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.711479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.711576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.711604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.711733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.711760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.711842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.711880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.712018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.712064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.712200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.712245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.712374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.712402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.712506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.859 [2024-05-16 20:23:36.712531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.859 qpair failed and we were unable to recover it. 00:24:49.859 [2024-05-16 20:23:36.712655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.712682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.712770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.712795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 
00:24:49.860 [2024-05-16 20:23:36.712920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.712958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.713085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.713115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.713235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.713263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.713366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.713394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.713523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.713551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.713646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.713688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.713771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.713795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.713882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.713911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.714019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.714048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.714195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.714224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 
00:24:49.860 [2024-05-16 20:23:36.714362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.714406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.714506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.714537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.714642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.714684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.714809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.714839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.714975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.715004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.715099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.715128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.715235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.715263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.715379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.715424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.715582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.715612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.715706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.715748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 
00:24:49.860 [2024-05-16 20:23:36.715835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.715866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.715954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.715980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.716089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.716115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.716197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.716241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.716363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.716391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.716513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.716542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.716645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.716674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.716771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.716796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.716920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.716947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.717033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.717058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 
00:24:49.860 [2024-05-16 20:23:36.717164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.717197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.717292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.717321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.717440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.717469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.717576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.717606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.717710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.717738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.717828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.717869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.717986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.718014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.718146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.718188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.718320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.718362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.718447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.718474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 
00:24:49.860 [2024-05-16 20:23:36.718564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.718590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.718673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.718698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.718781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.718806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.718890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.718919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.860 [2024-05-16 20:23:36.719052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.860 [2024-05-16 20:23:36.719080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.860 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.719277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.719303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.719381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.719407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.719520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.719546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.719678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.719704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.719798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.719836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 
00:24:49.861 [2024-05-16 20:23:36.719997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.720027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.720116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.720144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.720256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.720283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.720373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.720401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.720524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.720553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.720671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.720699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.720786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.720813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.720964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.721011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.721114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.721161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.721282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.721310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 
00:24:49.861 [2024-05-16 20:23:36.721398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.721426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.721547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.721579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.721690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.721716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.721830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.721860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.721948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.721974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.722085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.722113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.722234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.722262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.722360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.722389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.722530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.722562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.722700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.722726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 
00:24:49.861 [2024-05-16 20:23:36.722810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.722841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.722966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.722992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.723137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.723181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.723298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.723325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.723444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.723470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.723561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.723590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.723670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.723697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.723806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.723833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.723975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.724022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.724103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.724129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 
00:24:49.861 [2024-05-16 20:23:36.724241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.724266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.724347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.724372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.724490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.724515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.724603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.724629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.724718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.724744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.724825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.724850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.724939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.724965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.725071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.725097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.725177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.725206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.725306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.725335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 
00:24:49.861 [2024-05-16 20:23:36.725429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.725457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.725557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.725594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.861 qpair failed and we were unable to recover it. 00:24:49.861 [2024-05-16 20:23:36.725709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.861 [2024-05-16 20:23:36.725735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.862 qpair failed and we were unable to recover it. 00:24:49.862 [2024-05-16 20:23:36.725848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.862 [2024-05-16 20:23:36.725898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.862 qpair failed and we were unable to recover it. 00:24:49.862 [2024-05-16 20:23:36.725986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.862 [2024-05-16 20:23:36.726012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.862 qpair failed and we were unable to recover it. 00:24:49.862 [2024-05-16 20:23:36.726092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.862 [2024-05-16 20:23:36.726116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.862 qpair failed and we were unable to recover it. 00:24:49.862 [2024-05-16 20:23:36.726223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.862 [2024-05-16 20:23:36.726249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.862 qpair failed and we were unable to recover it. 00:24:49.862 [2024-05-16 20:23:36.726332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.862 [2024-05-16 20:23:36.726360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.862 qpair failed and we were unable to recover it. 00:24:49.862 [2024-05-16 20:23:36.726482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.862 [2024-05-16 20:23:36.726508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.862 qpair failed and we were unable to recover it. 00:24:49.862 [2024-05-16 20:23:36.726592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.862 [2024-05-16 20:23:36.726617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.862 qpair failed and we were unable to recover it. 
00:24:49.862 [2024-05-16 20:23:36.726696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.862 [2024-05-16 20:23:36.726721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420
00:24:49.862 qpair failed and we were unable to recover it.
[... the same pair of *ERROR* lines, each followed by "qpair failed and we were unable to recover it.", repeats continuously from 20:23:36.726 through 20:23:36.755 for tqpair handles 0x7ff278000b90, 0x7ff270000b90, 0x7ff268000b90, and 0x1789f90, always with addr=10.0.0.2, port=4420 ...]
00:24:49.865 [2024-05-16 20:23:36.755512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.865 [2024-05-16 20:23:36.755537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:49.865 qpair failed and we were unable to recover it.
00:24:49.865 [2024-05-16 20:23:36.755646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.755671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.755796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.755835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.755967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.755995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.756108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.756134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.756210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.756251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.756405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.756433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.756533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.756564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.756706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.756730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.756841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.756875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.756990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.757016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 
00:24:49.865 [2024-05-16 20:23:36.757113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.757142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.757299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.757326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.757418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.757446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.757590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.757619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.757722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.757751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.757901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.757941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.758106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.758152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.758264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.758310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.758441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.758483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.758598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.758624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 
00:24:49.865 [2024-05-16 20:23:36.758710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.758736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.758851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.758883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.758964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.758989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.759072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.759099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.759191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.759218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.759330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.759355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.759432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.759457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.759571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.759602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.759690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.759716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.759800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.759823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 
00:24:49.865 [2024-05-16 20:23:36.759985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.760030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.760137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.760165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.760274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.760300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.760419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.760447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.865 qpair failed and we were unable to recover it. 00:24:49.865 [2024-05-16 20:23:36.760596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.865 [2024-05-16 20:23:36.760620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.760703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.760729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.760816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.760840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.760931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.760956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.761060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.761084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.761173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.761199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 
00:24:49.866 [2024-05-16 20:23:36.761285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.761308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.761443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.761467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.761577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.761601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.761695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.761719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.761801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.761825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.761970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.761996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.762119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.762161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.762288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.762317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.762413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.762442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.762582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.762608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 
00:24:49.866 [2024-05-16 20:23:36.762724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.762748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.762837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.762869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.762998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.763026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.763122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.763154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.763280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.763312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.763473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.763518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.763633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.763660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.763749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.763775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.763859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.763885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.763962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.763987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 
00:24:49.866 [2024-05-16 20:23:36.764105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.764130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.764243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.764271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.764360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.764387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.764476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.764504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.764616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.764644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.764732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.764756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.764842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.764874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.764961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.764989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.765097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.765122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.765252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.765280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 
00:24:49.866 [2024-05-16 20:23:36.765375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.765406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.765534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.765563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.765699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.765728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.765842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.765880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.765963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.765991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.766100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.766130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.766233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.766258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.766343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.766368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.766452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.766479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.766562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.766588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 
00:24:49.866 [2024-05-16 20:23:36.766670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.766696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.766782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.766808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.766912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.766940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.767020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.767045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.767144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.767173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.767266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.767294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.767382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.767410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.767537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.767568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.767665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.767692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.767849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.767880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 
00:24:49.866 [2024-05-16 20:23:36.768032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.768060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.768150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.768180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.768311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.768340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.768475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.768504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.768620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.768648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.768770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.768795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.768880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.768935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.769029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.769057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.769187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.769214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.769332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.769359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 
00:24:49.866 [2024-05-16 20:23:36.769444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.769473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.769590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.769633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.769765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.769790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.769873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.769901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.769986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.770013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.770146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.770177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.770281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.770311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.770428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.866 [2024-05-16 20:23:36.770462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.866 qpair failed and we were unable to recover it. 00:24:49.866 [2024-05-16 20:23:36.770610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.770638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.770756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.770786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 
00:24:49.867 [2024-05-16 20:23:36.770925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.770950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.771062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.771086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.771175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.771199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.771366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.771408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.771510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.771551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.771686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.771728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.771833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.771864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.771965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.771991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.772072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.772096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.772203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.772232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 
00:24:49.867 [2024-05-16 20:23:36.772341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.772365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.772503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.772532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.772615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.772643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.772749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.772775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.772874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.772903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.773016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.773042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.773149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.773177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.773275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.773303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.773390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.773419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.773540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.773569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 
00:24:49.867 [2024-05-16 20:23:36.773655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.773683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.773797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.773825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.773954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.773993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.774083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.774110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.774227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.774252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.774378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.774408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.774528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.774557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.774644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.774672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.774762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.774803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.774922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.774946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 
00:24:49.867 [2024-05-16 20:23:36.775030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.775054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.775192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.775219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.775325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.775367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.775468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.775495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.775645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.775673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.775824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.775848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.775963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.775988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.776099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.776143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.776227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.776254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.776362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.776387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 
00:24:49.867 [2024-05-16 20:23:36.776490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.776517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.776602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.776629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.776747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.776776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.776873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.776918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.777006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.777031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.777137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.777162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.777289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.777316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.777416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.777442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.867 qpair failed and we were unable to recover it. 00:24:49.867 [2024-05-16 20:23:36.777581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.867 [2024-05-16 20:23:36.777609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.777744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.777773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 
00:24:49.868 [2024-05-16 20:23:36.777891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.777923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.778023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.778049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.778185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.778214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.778314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.778339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.778440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.778469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.778564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.778593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.778728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.778753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.778890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.778925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.779033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.779058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.779210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.779239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 
00:24:49.868 [2024-05-16 20:23:36.779363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.779392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.779542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.779573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.779748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.779787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.779883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.779911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.780028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.780055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.780131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.780157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.780266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.780296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.780398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.780441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.780538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.780567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.780675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.780700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 
00:24:49.868 [2024-05-16 20:23:36.780780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.780807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.780931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.780956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.781059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.781088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.781184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.781212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.781325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.781354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.781438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.781465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.781584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.781612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.781743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.781779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.781915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.781942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.782048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.782079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 
00:24:49.868 [2024-05-16 20:23:36.782204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.782233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.782350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.782379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.782534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.782591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.782680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.782706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.782816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.782841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.782937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.782963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.783058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.783086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.783219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.783247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.783344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.783371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 00:24:49.868 [2024-05-16 20:23:36.783525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.868 [2024-05-16 20:23:36.783558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.868 qpair failed and we were unable to recover it. 
00:24:49.868 [2024-05-16 20:23:36.783690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.783716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.783813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.783838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.783969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.783999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.784127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.784152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.784292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.784317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.784403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.784428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.784510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.784536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.784661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.784685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.784795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.784822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.784935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.784961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 
00:24:49.869 [2024-05-16 20:23:36.785072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.785096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.785182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.785207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.785286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.785313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.785428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.785453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.785557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.785596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.785713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.785738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.785886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.785914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.786013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.786042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.786184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.786213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.786333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.786362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 
00:24:49.869 [2024-05-16 20:23:36.786480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.786510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.786659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.786688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.786795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.786862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.786977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.787006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.787130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.787159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.787282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.787311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.787423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.787474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.787602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.787631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.787745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.787770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.787863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.787889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 
00:24:49.869 [2024-05-16 20:23:36.788001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.788027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.788113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.788139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.788265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.788291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.788399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.788425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.788562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.788588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.788675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.788700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.788814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.788839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.788965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.788994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.789086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.789113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.789228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.789254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 
00:24:49.869 [2024-05-16 20:23:36.789365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.789391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.789495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.789524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.789641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.789670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.789781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.789808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.789894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.789921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.790066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.790092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.790249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.790278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.790360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.790390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.790505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.790534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.790656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.790684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 
00:24:49.869 [2024-05-16 20:23:36.790812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.790837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.790935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.790961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.869 [2024-05-16 20:23:36.791070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.869 [2024-05-16 20:23:36.791095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.869 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.791233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.791259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.791398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.791442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.791655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.791708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.791809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.791837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.791977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.792003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.792112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.792138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.792226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.792268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 
00:24:49.870 [2024-05-16 20:23:36.792400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.792431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.792550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.792579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.792703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.792742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.792888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.792915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.793033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.793059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.793141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.793166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.793298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.793326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.793452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.793480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.793615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.793656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.793796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.793826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 
00:24:49.870 [2024-05-16 20:23:36.793965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.793991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.794107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.794132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.794292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.794320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.794405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.794434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.794528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.794559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.794704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.794734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.794866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.794892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.794965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.794991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.795103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.795146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.795288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.795335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 
00:24:49.870 [2024-05-16 20:23:36.795467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.795516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.795616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.795644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.795746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.795773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.795887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.795916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.796031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.796057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.796198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.796227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.796359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.796408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.796511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.796539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.796687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.796715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.796845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.796909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 
00:24:49.870 [2024-05-16 20:23:36.797023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.797048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.797159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.797184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.797273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.797315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.797409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.797439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.797610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.797672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.797768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.797795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.797914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.797940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.798025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.798050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.798194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.798222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 00:24:49.870 [2024-05-16 20:23:36.798341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.870 [2024-05-16 20:23:36.798369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.870 qpair failed and we were unable to recover it. 
00:24:49.870 [2024-05-16 20:23:36.798479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.871 [2024-05-16 20:23:36.798507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420
00:24:49.871 qpair failed and we were unable to recover it.
00:24:49.871 [2024-05-16 20:23:36.798654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.871 [2024-05-16 20:23:36.798686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:49.871 qpair failed and we were unable to recover it.
00:24:49.871 [2024-05-16 20:23:36.798802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.871 [2024-05-16 20:23:36.798842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:24:49.871 qpair failed and we were unable to recover it.
00:24:49.871 [2024-05-16 20:23:36.798984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.871 [2024-05-16 20:23:36.799012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420
00:24:49.871 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> "qpair failed and we were unable to recover it.") repeats continuously for tqpairs 0x1789f90, 0x7ff268000b90, 0x7ff270000b90 and 0x7ff278000b90, all with addr=10.0.0.2, port=4420, from 20:23:36.799 through 20:23:36.829 ...]
00:24:49.875 [2024-05-16 20:23:36.829147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.875 [2024-05-16 20:23:36.829176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.875 qpair failed and we were unable to recover it. 00:24:49.875 [2024-05-16 20:23:36.829321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.875 [2024-05-16 20:23:36.829364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.875 qpair failed and we were unable to recover it. 00:24:49.875 [2024-05-16 20:23:36.829518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.875 [2024-05-16 20:23:36.829564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.875 qpair failed and we were unable to recover it. 00:24:49.875 [2024-05-16 20:23:36.829652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.875 [2024-05-16 20:23:36.829677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.875 qpair failed and we were unable to recover it. 00:24:49.875 [2024-05-16 20:23:36.829789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.875 [2024-05-16 20:23:36.829814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.875 qpair failed and we were unable to recover it. 00:24:49.875 [2024-05-16 20:23:36.829912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.875 [2024-05-16 20:23:36.829939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.875 qpair failed and we were unable to recover it. 00:24:49.875 [2024-05-16 20:23:36.830093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.830131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.830225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.830252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.830341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.830367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.830454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.830481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 
00:24:49.876 [2024-05-16 20:23:36.830620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.830646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.830758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.830785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.830869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.830895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.831025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.831070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.831205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.831252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.831366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.831412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.831557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.831583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.831688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.831713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.831839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.831887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.831974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.832000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 
00:24:49.876 [2024-05-16 20:23:36.832135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.832160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.832303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.832351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.832506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.832556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.832649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.832691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.832797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.832822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.832937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.832963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.833047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.833073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.833176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.833204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.833317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.833344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.833436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.833465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 
00:24:49.876 [2024-05-16 20:23:36.833547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.833588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.833713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.833740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.833866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.833910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.834034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.834062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.834198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.834241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.834371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.834400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.834524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.834553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.834669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.834697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.834795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.834828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.834948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.834979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 
00:24:49.876 [2024-05-16 20:23:36.835067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.835111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.835213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.835241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.835411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.835458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.835550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.835578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.835692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.835719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.835809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.835837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.835978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.836005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.836110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.836138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.836266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.836294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.836449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.836475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 
00:24:49.876 [2024-05-16 20:23:36.836619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.836668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.836761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.836786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.836872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.836898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.836990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.837015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.837129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.837154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.837231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.837257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.876 qpair failed and we were unable to recover it. 00:24:49.876 [2024-05-16 20:23:36.837368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.876 [2024-05-16 20:23:36.837393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.837532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.837557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.837642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.837666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.837801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.837826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 
00:24:49.877 [2024-05-16 20:23:36.837953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.837979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.838078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.838105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.838228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.838256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.838342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.838372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.838551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.838602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.838719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.838745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.838864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.838894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.838998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.839027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.839159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.839202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.839305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.839331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 
00:24:49.877 [2024-05-16 20:23:36.839468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.839493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.839582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.839607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.839691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.839717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.839845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.839878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.839989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.840015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.840096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.840138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.840235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.840263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.840431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.840458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.840567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.840592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.840772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.840798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 
00:24:49.877 [2024-05-16 20:23:36.840887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.840912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.840999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.841024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.841115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.841143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.841245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.841271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.841406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.841434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.841557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.841585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.841687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.841730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.841873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.841917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.842027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.842052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.842137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.842162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 
00:24:49.877 [2024-05-16 20:23:36.842298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.842323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.842419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.842447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.842541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.842571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.842686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.842719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.842874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.842914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.843046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.843089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.843218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.843263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.843346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.843372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.843461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.843486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.843637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.843676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 
00:24:49.877 [2024-05-16 20:23:36.843788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.843816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.843933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.843959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.844060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.844088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.844204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.844232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.844376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.844430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.844607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.844658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.844756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.844786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.844896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.844934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.845057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.845083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.845218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.845247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 
00:24:49.877 [2024-05-16 20:23:36.845334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.845361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.845509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.845560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.845668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.845696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.845785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.845813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.845964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.845991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.846080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.846125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.846254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.846283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.846432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.846480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.846651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.846706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.846848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.846882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 
00:24:49.877 [2024-05-16 20:23:36.847005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.847032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.847132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.847168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.847319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.847368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.847550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.847600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.847688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.877 [2024-05-16 20:23:36.847729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.877 qpair failed and we were unable to recover it. 00:24:49.877 [2024-05-16 20:23:36.847826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.878 [2024-05-16 20:23:36.847873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.878 qpair failed and we were unable to recover it. 00:24:49.878 [2024-05-16 20:23:36.848017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.878 [2024-05-16 20:23:36.848046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.878 qpair failed and we were unable to recover it. 00:24:49.878 [2024-05-16 20:23:36.848142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.878 [2024-05-16 20:23:36.848171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.878 qpair failed and we were unable to recover it. 00:24:49.878 [2024-05-16 20:23:36.848284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.878 [2024-05-16 20:23:36.848312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.878 qpair failed and we were unable to recover it. 00:24:49.878 [2024-05-16 20:23:36.848424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.878 [2024-05-16 20:23:36.848452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.878 qpair failed and we were unable to recover it. 
00:24:49.878 [2024-05-16 20:23:36.848543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.878 [2024-05-16 20:23:36.848573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420
00:24:49.878 qpair failed and we were unable to recover it.
00:24:49.878-00:24:49.881 [The same three-line error pattern repeats without interruption from 20:23:36.848543 through 20:23:36.876619: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error to addr=10.0.0.2, port=4420 for tqpair 0x7ff278000b90, 0x7ff268000b90, or 0x1789f90, and every attempt ends with "qpair failed and we were unable to recover it."]
00:24:49.881 [2024-05-16 20:23:36.876733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.876760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.876840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.876874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.876985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.877011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.877123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.877148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.877227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.877253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.877334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.877360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.877448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.877474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.877578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.877616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.877701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.877728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.877835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.877867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 
00:24:49.881 [2024-05-16 20:23:36.877948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.877973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.878058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.878082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.878217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.878242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.878397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.878446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.878549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.878574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.878658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.878685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.878764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.878791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.878882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.878909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.879017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.879043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.879132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.879159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 
00:24:49.881 [2024-05-16 20:23:36.879251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.879277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.879391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.879417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.879503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.879530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.879613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.879639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.879715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.879740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.879835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.879878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.879989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.880013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.880117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.880143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.880229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.880256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.880364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.880390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 
00:24:49.881 [2024-05-16 20:23:36.880472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.880499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.880605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.880631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.880708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.880734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.880819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.880845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.880941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.880967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.881069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.881094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.881204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.881229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.881319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.881343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.881 [2024-05-16 20:23:36.881431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.881 [2024-05-16 20:23:36.881456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.881 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.881534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.881559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 
00:24:49.882 [2024-05-16 20:23:36.881668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.881693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.881767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.881791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.881923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.881961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.882052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.882080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.882184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.882224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.882333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.882379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.882528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.882572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.882707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.882757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.882850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.882884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.882972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.882997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 
00:24:49.882 [2024-05-16 20:23:36.883115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.883140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.883238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.883265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.883358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.883386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.883505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.883532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.883650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.883678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.883822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.883872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.883983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.884010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.884086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.884111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.884197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.884222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.884309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.884334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 
00:24:49.882 [2024-05-16 20:23:36.884456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.884515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.884623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.884653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.884760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.884787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.884878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.884922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.885032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.885057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.885138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.885163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.885244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.885268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.885398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.885427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.885524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.885551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.885650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.885680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 
00:24:49.882 [2024-05-16 20:23:36.885808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.885833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.885936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.885975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.886061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.886088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.886201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.886227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.886309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.886336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.886467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.886504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.886625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.886666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.886763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.886791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.886902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.886929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.887046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.887070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 
00:24:49.882 [2024-05-16 20:23:36.887171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.887199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.887343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.887370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.887465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.887494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.887582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.887610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.887771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.887815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.887966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.887997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.888102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.888131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.888263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.888304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.888474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.888533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.888622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.888648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 
00:24:49.882 [2024-05-16 20:23:36.888743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.888770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.888882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.888907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.888997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.889022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.889143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.889170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.889251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.889278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.889390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.889417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.889529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.889556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.889643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.889669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.889819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.889864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.889983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.890011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 
00:24:49.882 [2024-05-16 20:23:36.890130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.890156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.890261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.890290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.890383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.890414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.890537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.890566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.890677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.890705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.882 [2024-05-16 20:23:36.890785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.882 [2024-05-16 20:23:36.890812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.882 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.890948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.890986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.891080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.891126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.891225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.891254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.891358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.891387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 
00:24:49.883 [2024-05-16 20:23:36.891508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.891556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.891743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.891789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.891907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.891934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.892033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.892061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.892227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.892282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.892391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.892421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.892525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.892550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.892640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.892666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.892777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.892802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.892938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.892970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 
00:24:49.883 [2024-05-16 20:23:36.893067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.893096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.893188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.893216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.893308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.893337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.893446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.893492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.893620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.893658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.893749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.893775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.893872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.893920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.894017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.894044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.894152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.894183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.894271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.894299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 
00:24:49.883 [2024-05-16 20:23:36.894409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.894460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.894554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.894582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.894700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.894728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.894862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.894890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.894979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.895024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.895128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.895157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.895277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.895305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.895398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.895426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.895549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.895578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.895703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.895732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 
00:24:49.883 [2024-05-16 20:23:36.895837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.895871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.895985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.896014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.896100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.896146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.896291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.896320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.896435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.896470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.896577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.896606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.896746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.896772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.896871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.896900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.897014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.897039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.897179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.897227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 
00:24:49.883 [2024-05-16 20:23:36.897309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.897335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.897442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.897491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.897575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.897601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.897722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.897760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.897886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.897919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.897999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.898025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.898130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.898158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.898303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.898331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.898447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.898497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.898656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.898684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 
00:24:49.883 [2024-05-16 20:23:36.898824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.898864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.898975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.899003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.899097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.899123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.899209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.899235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.899347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.899377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.899498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-16 20:23:36.899527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.883 qpair failed and we were unable to recover it. 00:24:49.883 [2024-05-16 20:23:36.899649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.899690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.899800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.899827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.899948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.899987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.900107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.900149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 
00:24:49.884 [2024-05-16 20:23:36.900281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.900311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.900430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.900460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.900580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.900609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.900728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.900754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.900841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.900881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.900995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.901021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.901104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.901129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.901227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.901252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.901414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.901442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.901566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.901594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 
00:24:49.884 [2024-05-16 20:23:36.901710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.901753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.901845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.901883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.901977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.902003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.902092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.902119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.902261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.902304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.902429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.902457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.902610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.902637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.902723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.902748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.902834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.902865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.902952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.902978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 
00:24:49.884 [2024-05-16 20:23:36.903068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.903096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.903226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.903265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.903351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.903378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.903466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.903493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.903582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.903612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.903720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.903746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.903829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.903863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.903955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.903980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.904085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.904132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.904236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.904261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 
00:24:49.884 [2024-05-16 20:23:36.904363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.904392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.904523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.904551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.904662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.904706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.904841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.904875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.904970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.904996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.905083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.905128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.905279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.905308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.905456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.905484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.905642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.905672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.905782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.905811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 
00:24:49.884 [2024-05-16 20:23:36.905946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.905985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.906074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.906118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.906235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.906284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.906431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.906478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.906588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.906640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.906738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.906768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.906900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.906927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.907029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.907054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.907161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.907186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.907298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.907326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 
00:24:49.884 [2024-05-16 20:23:36.907419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.907446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.907576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.907608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.907712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.907743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.907844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.907905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.908029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.908059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.908151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.908179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.908308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.908336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.908425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.908452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.908541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.908568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.908722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.908747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 
00:24:49.884 [2024-05-16 20:23:36.908834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.908868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.908965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.908991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.909111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.884 [2024-05-16 20:23:36.909139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.884 qpair failed and we were unable to recover it. 00:24:49.884 [2024-05-16 20:23:36.909250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.909301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.909412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.909462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.909628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.909681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.909797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.909823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.909911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.909939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.910050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.910075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.910181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.910209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 
00:24:49.885 [2024-05-16 20:23:36.910320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.910348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.910436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.910465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.910554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.910582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.910727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.910755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.910862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.910905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.910996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.911023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.911102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.911128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.911258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.911287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.911451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.911480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.911592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.911642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 
00:24:49.885 [2024-05-16 20:23:36.911735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.911764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.911867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.911894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.911984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.912009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.912092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.912137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.912264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.912293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.912392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.912421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.912509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.912538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.912654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.912683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.912784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.912813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.912924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.912951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 
00:24:49.885 [2024-05-16 20:23:36.913028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.913053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.913133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.913163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.913258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.913283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.913399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.913428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.913524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.913552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.913641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.913669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.913816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.913860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.913983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.914012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.914095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.914124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.914236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.914262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 
00:24:49.885 [2024-05-16 20:23:36.914374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.914409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.914548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.914577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.914687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.914712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.914821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.914847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.914945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.914971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.915056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.915082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.915167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.915193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.915274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.915303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.915421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.915447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.915574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.915612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 
00:24:49.885 [2024-05-16 20:23:36.915711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.915737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.915823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.915849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.915973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.915999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.916095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.916123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.916270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.916320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.916440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.916491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.916617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.916646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.916752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.916779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.916894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.916922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.917014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.917040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 
00:24:49.885 [2024-05-16 20:23:36.917126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.917152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.917236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.917262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.917395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.917423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.917529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.917557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.917650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.917678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.917804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.917829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.917941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.917966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.918059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.918084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.918218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.918245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 00:24:49.885 [2024-05-16 20:23:36.918361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.885 [2024-05-16 20:23:36.918388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.885 qpair failed and we were unable to recover it. 
00:24:49.885 [2024-05-16 20:23:36.918484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.918515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.918642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.918676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.918795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.918823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.918943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.918971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.919068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.919097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.919212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.919240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.919363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.919393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.919515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.919542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.919656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.919683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.919778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.919805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 
00:24:49.886 [2024-05-16 20:23:36.919914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.919940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.920014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.920058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.920165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.920191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.920310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.920339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.920461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.920488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.920622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.920652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.920771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.920801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.920897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.920924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.921057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.921101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.921196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.921221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 
00:24:49.886 [2024-05-16 20:23:36.921338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.921365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.921477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.921503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.921613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.921638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.921736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.921775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.921865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.921893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.922004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.922029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.922123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.922148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.922233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.922257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.922347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.922377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.922486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.922519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 
00:24:49.886 [2024-05-16 20:23:36.922628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.922656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.922780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.922819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.922921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.922950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.923029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.923056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.923166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.923195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.923309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.923338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.923421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.923450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.923618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.923664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.923778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.923804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.923915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.923953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 
00:24:49.886 [2024-05-16 20:23:36.924045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.924072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.924169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.924198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.924303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.924332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.924419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.924448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.924561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.924590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.924701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.924726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.924826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.924883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.924969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.924995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.925090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.925116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.925198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.925223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 
00:24:49.886 [2024-05-16 20:23:36.925311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.925337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.925417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.925444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.925531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.925557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.925667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.925692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.925775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.925800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.925905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.925950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.926070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.926098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.926223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.926252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.926370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.886 [2024-05-16 20:23:36.926398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.886 qpair failed and we were unable to recover it. 00:24:49.886 [2024-05-16 20:23:36.926498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.926525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 
00:24:49.887 [2024-05-16 20:23:36.926637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.926678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.926765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.926789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.926904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.926933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.927044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.927070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.927162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.927191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.927279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.927308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.927430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.927460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.927577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.927606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.927722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.927758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.927879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.927919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 
00:24:49.887 [2024-05-16 20:23:36.928035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.928067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.928192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.928222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.928316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.928343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.928463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.928491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.928589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.928616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.928733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.928760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.928866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.928892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.928987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.929014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.929128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.929156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.929254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.929282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 
00:24:49.887 [2024-05-16 20:23:36.929373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.929400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.929517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.929543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.929653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.929680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.929804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.929829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.929912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.929937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.930020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.930044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.930148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.930191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.930279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.930307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.930414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.930439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.930570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.930598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 
00:24:49.887 [2024-05-16 20:23:36.930696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.930724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.930815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.930843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.930993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.931018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.931109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.931133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.931218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.931259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.931365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.931390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.931545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.931589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.931701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.931731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.931874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.931902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.931996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.932024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 
00:24:49.887 [2024-05-16 20:23:36.932112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.932156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.932304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.932333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.932429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.932458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.932594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.932621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.932720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.932745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.932841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.932872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.932978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.933004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.933100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.933127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.933224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.933262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.933366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.933395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 
00:24:49.887 [2024-05-16 20:23:36.933535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.933597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.933694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.933721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.933833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.933865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.933996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.934039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.934126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.934155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.934243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.934269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.934367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.934395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.934489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.934516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.934631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.934657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.934737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.934762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 
00:24:49.887 [2024-05-16 20:23:36.934843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.934874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.934954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.934980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.935107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.935138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.935236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.935262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.935395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.935441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.935535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.935561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.935649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.935678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.887 qpair failed and we were unable to recover it. 00:24:49.887 [2024-05-16 20:23:36.935788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.887 [2024-05-16 20:23:36.935814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.935925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.935954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.936080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.936109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 
00:24:49.888 [2024-05-16 20:23:36.936228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.936258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.936384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.936414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.936539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.936570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.936674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.936728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.936887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.936926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.937060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.937108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.937223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.937274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.937423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.937469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.937559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.937585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.937724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.937748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 
00:24:49.888 [2024-05-16 20:23:36.937866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.937923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.938028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.938058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.938191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.938219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.938331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.938381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.938490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.938525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.938698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.938725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.938843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.938877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.938956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.938982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.939103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.939146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.939293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.939340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 
00:24:49.888 [2024-05-16 20:23:36.939439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.939467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.939568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.939594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.939692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.939730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.939830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.939880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.939976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.940004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.940133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.940163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.940280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.940327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.940454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.940482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.940594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.940639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.940750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.940779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 
00:24:49.888 [2024-05-16 20:23:36.940898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.940925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.941012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.941055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.941229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.941277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.941404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.941450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.941614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.941642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.941744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.941769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.941859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.941886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.941976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.942001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.942094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.942122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 00:24:49.888 [2024-05-16 20:23:36.942223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.888 [2024-05-16 20:23:36.942251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:49.888 qpair failed and we were unable to recover it. 
00:24:49.888 [2024-05-16 20:23:36.942338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.888 [2024-05-16 20:23:36.942368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420
00:24:49.888 qpair failed and we were unable to recover it.
[... the same pair of errors (posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error and "qpair failed and we were unable to recover it") repeats continuously from 20:23:36.942 to 20:23:36.973 for tqpair values 0x1789f90, 0x7ff268000b90, 0x7ff270000b90 and 0x7ff278000b90, all against addr=10.0.0.2, port=4420 ...]
00:24:50.175 [2024-05-16 20:23:36.973052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.175 [2024-05-16 20:23:36.973095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:50.175 qpair failed and we were unable to recover it.
00:24:50.175 [2024-05-16 20:23:36.973176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.175 [2024-05-16 20:23:36.973202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.175 qpair failed and we were unable to recover it. 00:24:50.175 [2024-05-16 20:23:36.973301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.175 [2024-05-16 20:23:36.973344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.175 qpair failed and we were unable to recover it. 00:24:50.175 [2024-05-16 20:23:36.973423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.175 [2024-05-16 20:23:36.973448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.175 qpair failed and we were unable to recover it. 00:24:50.175 [2024-05-16 20:23:36.973555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.175 [2024-05-16 20:23:36.973580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.175 qpair failed and we were unable to recover it. 00:24:50.175 [2024-05-16 20:23:36.973707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.175 [2024-05-16 20:23:36.973746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.175 qpair failed and we were unable to recover it. 00:24:50.175 [2024-05-16 20:23:36.973843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.175 [2024-05-16 20:23:36.973877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.175 qpair failed and we were unable to recover it. 00:24:50.175 [2024-05-16 20:23:36.973970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.175 [2024-05-16 20:23:36.973996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.175 qpair failed and we were unable to recover it. 00:24:50.175 [2024-05-16 20:23:36.974152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.175 [2024-05-16 20:23:36.974180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.175 qpair failed and we were unable to recover it. 00:24:50.175 [2024-05-16 20:23:36.974269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.175 [2024-05-16 20:23:36.974298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.175 qpair failed and we were unable to recover it. 00:24:50.175 [2024-05-16 20:23:36.974415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.175 [2024-05-16 20:23:36.974467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.175 qpair failed and we were unable to recover it. 
00:24:50.175 [2024-05-16 20:23:36.974567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.175 [2024-05-16 20:23:36.974595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.175 qpair failed and we were unable to recover it. 00:24:50.175 [2024-05-16 20:23:36.974728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.175 [2024-05-16 20:23:36.974759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.175 qpair failed and we were unable to recover it. 00:24:50.175 [2024-05-16 20:23:36.974904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.175 [2024-05-16 20:23:36.974931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.175 qpair failed and we were unable to recover it. 00:24:50.175 [2024-05-16 20:23:36.975070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.175 [2024-05-16 20:23:36.975098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.175 qpair failed and we were unable to recover it. 00:24:50.175 [2024-05-16 20:23:36.975228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.175 [2024-05-16 20:23:36.975275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.175 qpair failed and we were unable to recover it. 00:24:50.175 [2024-05-16 20:23:36.975411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.175 [2024-05-16 20:23:36.975446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.175 qpair failed and we were unable to recover it. 00:24:50.175 [2024-05-16 20:23:36.975568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.175 [2024-05-16 20:23:36.975596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.175 qpair failed and we were unable to recover it. 00:24:50.175 [2024-05-16 20:23:36.975739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.975767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.975884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.975911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.976023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.976048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 
00:24:50.176 [2024-05-16 20:23:36.976183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.976212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.976341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.976384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.976479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.976507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.976603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.976632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.976783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.976811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.976952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.976979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.977068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.977096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.977209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.977252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.977391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.977434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.977529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.977557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 
00:24:50.176 [2024-05-16 20:23:36.977673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.977700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.977796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.977824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.977951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.977990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.978135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.978165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.978311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.978339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.978437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.978465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.978641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.978691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.978809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.978835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.978922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.978947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.979081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.979126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 
00:24:50.176 [2024-05-16 20:23:36.979230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.979259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.979383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.979428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.979540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.979565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.979666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.979706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.979798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.979826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.979920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.979950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.980069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.980094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.980239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.980268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.980356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.980384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.980531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.980575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 
00:24:50.176 [2024-05-16 20:23:36.980684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.980713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.980827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.980858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.981002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.981031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.981173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.981221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.981332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.981381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.981567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.981596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.176 qpair failed and we were unable to recover it. 00:24:50.176 [2024-05-16 20:23:36.981710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.176 [2024-05-16 20:23:36.981739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.981866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.981893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.982008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.982035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.982171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.982201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 
00:24:50.177 [2024-05-16 20:23:36.982327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.982355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.982453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.982481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.982635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.982662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.982811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.982850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.982962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.982989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.983129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.983176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.983311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.983356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.983490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.983526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.983632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.983659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.983777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.983802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 
00:24:50.177 [2024-05-16 20:23:36.983918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.983957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.984096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.984127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.984282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.984332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.984483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.984531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.984641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.984668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.984760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.984784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.984921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.984952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.985038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.985063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.985144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.985170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.985284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.985309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 
00:24:50.177 [2024-05-16 20:23:36.985427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.985453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.985543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.985568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.985696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.985735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.985859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.985904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.986012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.986037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.986202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.986230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.986369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.986418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.986558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.177 [2024-05-16 20:23:36.986607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.177 qpair failed and we were unable to recover it. 00:24:50.177 [2024-05-16 20:23:36.986716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.986743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.986868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.986894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 
00:24:50.178 [2024-05-16 20:23:36.987043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.987072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.987187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.987215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.987341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.987389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.987501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.987527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.987613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.987638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.987734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.987773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.987890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.987918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.988058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.988085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.988203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.988230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.988312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.988355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 
00:24:50.178 [2024-05-16 20:23:36.988508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.988536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.988628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.988659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.988795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.988836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.988995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.989028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.989136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.989165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.989311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.989359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.989498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.989545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.989653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.989681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.989795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.989820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.989925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.989964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 
00:24:50.178 [2024-05-16 20:23:36.990063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.990090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.990225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.990254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.990407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.990437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.990613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.990662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.990792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.990820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.990937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.990964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.991070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.991095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.991218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.991243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.991351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.991378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.991474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.991501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 
00:24:50.178 [2024-05-16 20:23:36.991621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.991648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.991794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.991833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.991933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.991961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.992048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.992074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.992170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.992212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.992341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.992385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.992467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.992495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.178 [2024-05-16 20:23:36.992603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.178 [2024-05-16 20:23:36.992631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.178 qpair failed and we were unable to recover it. 00:24:50.179 [2024-05-16 20:23:36.992792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.179 [2024-05-16 20:23:36.992831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.179 qpair failed and we were unable to recover it. 00:24:50.179 [2024-05-16 20:23:36.992989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.179 [2024-05-16 20:23:36.993032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.179 qpair failed and we were unable to recover it. 
00:24:50.179 [2024-05-16 20:23:36.993139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.179 [2024-05-16 20:23:36.993168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420
00:24:50.179 qpair failed and we were unable to recover it.
00:24:50.179 [2024-05-16 20:23:36.994037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.179 [2024-05-16 20:23:36.994086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:50.179 qpair failed and we were unable to recover it.
00:24:50.179 [2024-05-16 20:23:36.995527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.179 [2024-05-16 20:23:36.995566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:24:50.179 qpair failed and we were unable to recover it.
00:24:50.179 [2024-05-16 20:23:36.996848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.179 [2024-05-16 20:23:36.996911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420
00:24:50.179 qpair failed and we were unable to recover it.
00:24:50.179-00:24:50.246 [... the same pair of errors repeats continuously from 20:23:36.993 through 20:23:37.024: posix.c:1037:posix_sock_create reports connect() failed, errno = 111, and nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock reports a sock connection error for tqpairs 0x7ff278000b90, 0x7ff270000b90, 0x7ff268000b90 and 0x1789f90, all targeting addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:24:50.246 [2024-05-16 20:23:37.024411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.024441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 00:24:50.246 [2024-05-16 20:23:37.024586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.024612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 00:24:50.246 [2024-05-16 20:23:37.024704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.024730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 00:24:50.246 [2024-05-16 20:23:37.024841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.024875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 00:24:50.246 [2024-05-16 20:23:37.024993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.025019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 00:24:50.246 [2024-05-16 20:23:37.025132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.025158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 00:24:50.246 [2024-05-16 20:23:37.025300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.025330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 00:24:50.246 [2024-05-16 20:23:37.025437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.025463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 00:24:50.246 [2024-05-16 20:23:37.025551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.025577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 00:24:50.246 [2024-05-16 20:23:37.025727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.025758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 
00:24:50.246 [2024-05-16 20:23:37.025897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.025924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 00:24:50.246 [2024-05-16 20:23:37.026028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.026053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 00:24:50.246 [2024-05-16 20:23:37.026139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.026180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 00:24:50.246 [2024-05-16 20:23:37.026277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.026302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 00:24:50.246 [2024-05-16 20:23:37.026385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.026409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 00:24:50.246 [2024-05-16 20:23:37.026539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.026569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 00:24:50.246 [2024-05-16 20:23:37.026702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.026729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 00:24:50.246 [2024-05-16 20:23:37.026812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.026842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 00:24:50.246 [2024-05-16 20:23:37.026965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.026991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 00:24:50.246 [2024-05-16 20:23:37.027077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.027103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 
00:24:50.246 [2024-05-16 20:23:37.027241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.246 [2024-05-16 20:23:37.027266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.246 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.027394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.027424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.027536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.027561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.027678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.027702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.027785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.027811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.027951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.027977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.028058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.028084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.028220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.028251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.028363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.028389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.028501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.028526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 
00:24:50.247 [2024-05-16 20:23:37.028675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.028703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.028848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.028879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.028968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.028993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.029121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.029150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.029255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.029281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.029400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.029426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.029505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.029531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.029641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.029667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.029790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.029845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.029985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.030013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 
00:24:50.247 [2024-05-16 20:23:37.030126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.030151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.030238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.030262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.030391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.030418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.030545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.030569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.030654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.030679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.030826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.030882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.030990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.031017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.031103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.031128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.031226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.031256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.031358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.031383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 
00:24:50.247 [2024-05-16 20:23:37.031501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.247 [2024-05-16 20:23:37.031529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.247 qpair failed and we were unable to recover it. 00:24:50.247 [2024-05-16 20:23:37.031698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.031724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 00:24:50.248 [2024-05-16 20:23:37.031806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.031831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 00:24:50.248 [2024-05-16 20:23:37.031942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.031967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 00:24:50.248 [2024-05-16 20:23:37.032078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.032103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 00:24:50.248 [2024-05-16 20:23:37.032187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.032212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 00:24:50.248 [2024-05-16 20:23:37.032324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.032349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 00:24:50.248 [2024-05-16 20:23:37.032515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.032549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 00:24:50.248 [2024-05-16 20:23:37.032649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.032676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 00:24:50.248 [2024-05-16 20:23:37.032788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.032814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 
00:24:50.248 [2024-05-16 20:23:37.032907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.032952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 00:24:50.248 [2024-05-16 20:23:37.033087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.033113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 00:24:50.248 [2024-05-16 20:23:37.033196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.033224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 00:24:50.248 [2024-05-16 20:23:37.033358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.033384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 00:24:50.248 [2024-05-16 20:23:37.033530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.033556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 00:24:50.248 [2024-05-16 20:23:37.033645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.033671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 00:24:50.248 [2024-05-16 20:23:37.033763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.033789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 00:24:50.248 [2024-05-16 20:23:37.033896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.033923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 00:24:50.248 [2024-05-16 20:23:37.034030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.034068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 00:24:50.248 [2024-05-16 20:23:37.034232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.034262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 
00:24:50.248 [2024-05-16 20:23:37.034388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.034414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 00:24:50.248 [2024-05-16 20:23:37.034536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.034562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 00:24:50.248 [2024-05-16 20:23:37.034657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.034685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.248 qpair failed and we were unable to recover it. 00:24:50.248 [2024-05-16 20:23:37.034792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.248 [2024-05-16 20:23:37.034818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.034961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.034987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.035096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.035120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.035259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.035284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.035422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.035447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.035601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.035626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.035719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.035748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 
00:24:50.249 [2024-05-16 20:23:37.035859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.035888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.036025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.036051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.036163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.036189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.036304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.036330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.036457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.036490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.036616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.036642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.036759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.036784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.036873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.036901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.037013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.037039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.037148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.037174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 
00:24:50.249 [2024-05-16 20:23:37.037291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.037317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.037459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.037484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.037595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.037621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.037708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.037734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.037816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.037842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.037945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.037972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.038056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.038082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.038184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.038210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.038305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.038331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.038436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.038480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 
00:24:50.249 [2024-05-16 20:23:37.038599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.038625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.038736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.038762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.038871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.038898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.038986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.039012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.039098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.039125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.039261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.039287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.039424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.039463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.039591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.039625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.039721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.039762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.039879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.039906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 
00:24:50.249 [2024-05-16 20:23:37.040017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.040041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.040131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.040172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.040321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.040349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.040463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.040491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.040592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.040620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.040740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.040768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.040905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.040931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.041019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.041046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.041127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.041169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.041273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.041313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 
00:24:50.249 [2024-05-16 20:23:37.041401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.041429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.041553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.041581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.041694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.041719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.041859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.041885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.041974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.042000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.042120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.042145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.249 qpair failed and we were unable to recover it. 00:24:50.249 [2024-05-16 20:23:37.042221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.249 [2024-05-16 20:23:37.042246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.250 qpair failed and we were unable to recover it. 00:24:50.250 [2024-05-16 20:23:37.042355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.250 [2024-05-16 20:23:37.042383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.250 qpair failed and we were unable to recover it. 00:24:50.250 [2024-05-16 20:23:37.042486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.250 [2024-05-16 20:23:37.042512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.250 qpair failed and we were unable to recover it. 00:24:50.250 [2024-05-16 20:23:37.042626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.250 [2024-05-16 20:23:37.042654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.250 qpair failed and we were unable to recover it. 
00:24:50.250 [2024-05-16 20:23:37.042804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.250 [2024-05-16 20:23:37.042831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.250 qpair failed and we were unable to recover it. 00:24:50.250 [2024-05-16 20:23:37.043002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.250 [2024-05-16 20:23:37.043042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.250 qpair failed and we were unable to recover it. 00:24:50.250 [2024-05-16 20:23:37.043160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.250 [2024-05-16 20:23:37.043186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.250 qpair failed and we were unable to recover it. 00:24:50.250 [2024-05-16 20:23:37.043316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.250 [2024-05-16 20:23:37.043359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.250 qpair failed and we were unable to recover it. 00:24:50.250 [2024-05-16 20:23:37.043445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.250 [2024-05-16 20:23:37.043470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.250 qpair failed and we were unable to recover it. 00:24:50.250 [2024-05-16 20:23:37.043623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.250 [2024-05-16 20:23:37.043663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.250 qpair failed and we were unable to recover it. 00:24:50.250 [2024-05-16 20:23:37.043751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.250 [2024-05-16 20:23:37.043777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.250 qpair failed and we were unable to recover it. 00:24:50.250 [2024-05-16 20:23:37.043862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.250 [2024-05-16 20:23:37.043888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.250 qpair failed and we were unable to recover it. 00:24:50.250 [2024-05-16 20:23:37.044000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.250 [2024-05-16 20:23:37.044027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.250 qpair failed and we were unable to recover it. 00:24:50.250 [2024-05-16 20:23:37.044117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.250 [2024-05-16 20:23:37.044142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.250 qpair failed and we were unable to recover it. 
00:24:50.253 [2024-05-16 20:23:37.076071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.076097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.076179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.076222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.076342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.076368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.076458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.076485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.076613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.076655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.076741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.076767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.076851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.076884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.077023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.077048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.077134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.077158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.077266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.077291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 
00:24:50.253 [2024-05-16 20:23:37.077398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.077423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.077534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.077559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.077673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.077698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.077889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.077927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.078044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.078070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.078160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.078186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.078288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.078330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.078468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.078493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.078579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.078604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.078706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.078750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 
00:24:50.253 [2024-05-16 20:23:37.078889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.078914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.079002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.079028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.079154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.079182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.079303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.079328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.079440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.079465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.079558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.079589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.079719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.079747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.079881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.079927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.080065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.080091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.080194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.080219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 
00:24:50.253 [2024-05-16 20:23:37.080308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.080335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.080478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.080503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.080616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.080642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.080781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.080824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.080993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.081020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.081107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.081132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.081228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.081253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.081333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.081358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.081438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.081464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.081557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.081582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 
00:24:50.253 [2024-05-16 20:23:37.081714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.081742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.081858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.081884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.081994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.082019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.082128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.082153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.082235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.082261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.082348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.082374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.082529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.082557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.082666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.082691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.082794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.082822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.082930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.082959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 
00:24:50.253 [2024-05-16 20:23:37.083072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.083097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.083190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.083215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.083366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.083393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.083507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.083537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.083647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.083673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.083831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.083865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.084001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.084027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.084161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.084187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.084346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.084375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.084507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.084533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 
00:24:50.253 [2024-05-16 20:23:37.084612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.084637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.084763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.084791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.084936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.084962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.253 [2024-05-16 20:23:37.085043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.253 [2024-05-16 20:23:37.085069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.253 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.085200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.085229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.085389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.085415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.085526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.085552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.085730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.085756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.085868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.085894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.086002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.086027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 
00:24:50.254 [2024-05-16 20:23:37.086117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.086161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.086300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.086326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.086463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.086505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.086621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.086650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.086760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.086802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.086981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.087019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.087165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.087219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.087391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.087417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.087530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.087570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.087692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.087721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 
00:24:50.254 [2024-05-16 20:23:37.087862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.087889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.087972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.087998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.088107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.088135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.088250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.088276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.088419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.088448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.088579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.088608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.088749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.088775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.088911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.088938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.089027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.089053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.089163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.089189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 
00:24:50.254 [2024-05-16 20:23:37.089308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.089335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.089486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.089515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.089623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.089649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.089766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.089796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.089909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.089935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.090044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.090070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.090177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.090202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.090324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.090351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.090463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.090488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.090578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.090607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 
00:24:50.254 [2024-05-16 20:23:37.090700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.090726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.090832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.090866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.091020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.091046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.091197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.091226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.091360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.091386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.091528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.091570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.091695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.091723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.091835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.091866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.091978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.092004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.092104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.092147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 
00:24:50.254 [2024-05-16 20:23:37.092264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.092290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.092404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.092430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.092534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.092563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.092719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.092745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.092860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.092886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.092976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.093002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.093112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.093138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.093218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.093243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.093368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.093397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.093532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.093558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 
00:24:50.254 [2024-05-16 20:23:37.093699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.093726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.093836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.093872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.093987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.094012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.094124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.094151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.094230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.094256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.094368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.094393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.094501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.094526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.094609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.094651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.094774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.094802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.094931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.094958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 
00:24:50.254 [2024-05-16 20:23:37.095064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.254 [2024-05-16 20:23:37.095090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.254 qpair failed and we were unable to recover it. 00:24:50.254 [2024-05-16 20:23:37.095207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.095232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.095366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.095391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.095474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.095504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.095612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.095637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.095743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.095769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.095859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.095884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.095991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.096016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.096316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.096342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.096444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.096471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 
00:24:50.255 [2024-05-16 20:23:37.096634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.096659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.096760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.096797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.096924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.096961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.097051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.097076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.097194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.097218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.097322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.097349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.097477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.097501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.097590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.097614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.097690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.097713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.097827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.097850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 
00:24:50.255 [2024-05-16 20:23:37.097942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.097966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.098051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.098076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.098208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.098233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.098310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.098334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.098476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.098502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.098610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.098633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.098721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.098745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.098834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.098877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.098970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.098996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.099080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.099105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 
00:24:50.255 [2024-05-16 20:23:37.099203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.099235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.099316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.099341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.099449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.099474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.099629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.099658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.099801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.099826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.099922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.099949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.100032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.100058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.100172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.100198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.100293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.100320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.100434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.100460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 
00:24:50.255 [2024-05-16 20:23:37.100544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.100570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.100650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.100693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.100825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.100864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.100977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.101002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.101096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.101121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.101234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.101259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.101371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.101397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.101479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.101507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.101624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.101649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.101745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.101771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 
00:24:50.255 [2024-05-16 20:23:37.101863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.101889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.101971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.101996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.102109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.102135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.102241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.102266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.102356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.102385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.102474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.102500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.102614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.102641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.102775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.102804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.102919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.102945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.103071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.103096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 
00:24:50.255 [2024-05-16 20:23:37.103208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.103233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.103324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.103349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.103436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.103461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.103599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.103637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.103745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.103784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.103906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.103934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.104043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.255 [2024-05-16 20:23:37.104069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.255 qpair failed and we were unable to recover it. 00:24:50.255 [2024-05-16 20:23:37.104156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.104181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.104280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.104309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.104400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.104429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 
00:24:50.256 [2024-05-16 20:23:37.104526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.104558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.104717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.104766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.104882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.104910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.105031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.105060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.105182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.105208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.105295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.105320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.105420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.105449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.105585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.105621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 319870 Killed "${NVMF_APP[@]}" "$@" 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.105741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.105769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 
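For context on the storm of identical errors above: errno 111 is ECONNREFUSED, and the "Killed "${NVMF_APP[@]}"" message from target_disconnect.sh is the test deliberately taking the running nvmf_tgt down. With no listener left on 10.0.0.2:4420, every connect() the initiator retries is refused immediately, which is why posix_sock_create and nvme_tcp_qpair_connect_sock keep logging the same pair of errors for each qpair. A minimal probe, a sketch assuming only bash plus the address and port taken from the trace (not part of the SPDK harness), shows the same behaviour:

#!/usr/bin/env bash
# Probe 10.0.0.2:4420 the way the initiator above keeps doing. While no
# nvmf_tgt is listening, each attempt fails at once with ECONNREFUSED
# (errno 111); once a listener accepts, the loop stops.
addr=10.0.0.2
port=4420
for attempt in $(seq 1 10); do
    # timeout covers the case where SYNs are silently dropped instead of refused
    if timeout 1 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
        echo "attempt ${attempt}: connected to ${addr}:${port}"
        break
    fi
    echo "attempt ${attempt}: connect() failed (no listener yet)"
    sleep 0.5
done

The loop exits as soon as something accepts on the port, which is effectively the condition the retrying initiator above is waiting for.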
00:24:50.256 [2024-05-16 20:23:37.105912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.105939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.106036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.106061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.106145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.106170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.106299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.106342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 20:23:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:24:50.256 [2024-05-16 20:23:37.106445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.106474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.106589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.106617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.106724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.106750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 20:23:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:50.256 [2024-05-16 20:23:37.106839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.106871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.106956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.106982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 
00:24:50.256 20:23:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:50.256 [2024-05-16 20:23:37.107083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.107111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.107208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 20:23:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:50.256 [2024-05-16 20:23:37.107236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.107357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.107385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.107476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 20:23:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:50.256 [2024-05-16 20:23:37.107504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.107609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.107637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.107752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.107796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.107938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.107977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.108073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.108099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.108240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.108285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 
00:24:50.256 [2024-05-16 20:23:37.108367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.108396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.108496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.108523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.108641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.108668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.108768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.108799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.108952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.108995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.109134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.109165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.109287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.109338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.109489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.109537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.109623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.109663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.109780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.109807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 
00:24:50.256 [2024-05-16 20:23:37.109890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.109916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.110030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.110066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.110190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.110219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.110302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.110332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.110483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.110512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.110613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.110642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.110760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.110788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.110898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.110924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.111009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.111053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.111147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.111175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 
00:24:50.256 [2024-05-16 20:23:37.111273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.111303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.111431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.111460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.111560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.111604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.111744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.111772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.111865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.111892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.111988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.112015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.112124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.112152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.112278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.112307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.112404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.112432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.112539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.112585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 
00:24:50.256 [2024-05-16 20:23:37.112694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.112721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 20:23:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=320431 00:24:50.256 [2024-05-16 20:23:37.112836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.112874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 20:23:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:50.256 [2024-05-16 20:23:37.112968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.112998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.256 20:23:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 320431 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.113106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.113144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.256 [2024-05-16 20:23:37.113234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.256 [2024-05-16 20:23:37.113259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.256 qpair failed and we were unable to recover it. 00:24:50.257 20:23:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 320431 ']' 00:24:50.257 [2024-05-16 20:23:37.113351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.113376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 20:23:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.257 [2024-05-16 20:23:37.113494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.113521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 
00:24:50.257 20:23:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:50.257 [2024-05-16 20:23:37.113639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.113666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 20:23:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.257 [2024-05-16 20:23:37.113750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.113779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.113877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.113912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.257 20:23:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.114006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.114033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 20:23:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:50.257 [2024-05-16 20:23:37.114119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.114146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.114228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.114275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.114395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.114424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.114539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.114568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 
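The xtrace lines interleaved with the errors show the tc2 case bringing a fresh target up: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with the command shown above, records nvmfpid=320431, and waitforlisten blocks until that process is serving RPCs on /var/tmp/spdk.sock. The following is only a rough stand-in for that restart-and-wait step: the nvmf_tgt command line is copied from the trace, but the rpc.py path and the polling loop are assumptions sketching what waitforlisten does, not the harness's actual code.

#!/usr/bin/env bash
# Start a new target in the test namespace (command and flags as in the trace).
spdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "${spdk_dir}/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!

# Poll the RPC socket until the app answers; rpc_get_methods succeeds only
# once nvmf_tgt is up and listening on /var/tmp/spdk.sock.
until "${spdk_dir}/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "${nvmfpid}" 2>/dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid ${nvmfpid}) is ready"

Only after this wait completes does the test move on, so the connection-refused errors keep accumulating in the host log until the new target is actually listening.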
00:24:50.257 [2024-05-16 20:23:37.114675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.114718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.114809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.114839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.114966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.115014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.115117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.115154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.115343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.115390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.115500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.115533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.115675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.115703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.115825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.115865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.115998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.116024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.116110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.116135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 
00:24:50.257 [2024-05-16 20:23:37.116223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.116249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.116370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.116416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.116551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.116582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.116686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.116729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.116828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.116883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.116988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.117017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.117100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.117127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.117236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.117261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.117370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.117399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.117524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.117553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 
00:24:50.257 [2024-05-16 20:23:37.117664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.117707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.117821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.117849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.117991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.118019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.118135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.118161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.118241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.118267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.118373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.118402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.118494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.118524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.118638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.118666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.118814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.118846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 00:24:50.257 [2024-05-16 20:23:37.118987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.257 [2024-05-16 20:23:37.119017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.257 qpair failed and we were unable to recover it. 
[The same three-line failure pattern (connect() failed, errno = 111; sock connection error of tqpair=... with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for the remaining reconnect attempts, timestamps 20:23:37.119117 through 20:23:37.146638, cycling over tqpairs 0x7ff278000b90, 0x7ff270000b90, 0x7ff268000b90, and 0x1789f90.]
00:24:50.260 [2024-05-16 20:23:37.146752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.260 [2024-05-16 20:23:37.146778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420
00:24:50.260 qpair failed and we were unable to recover it.
00:24:50.260 [2024-05-16 20:23:37.146925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.146952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.147050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.147078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.147175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.147203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.147327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.147355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.147460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.147506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.147609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.147636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.147748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.147773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.147863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.147889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.148015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.148043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.148127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.148155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 
00:24:50.260 [2024-05-16 20:23:37.148244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.148274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.148377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.148423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.148553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.148584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.148679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.148722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.148811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.148836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.148930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.148976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.149090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.149118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.149238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.149266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.149358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.149387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.149489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.149521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 
00:24:50.260 [2024-05-16 20:23:37.149661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.149704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.149815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.149845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.149972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.149998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.150117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.150161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.150258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.150302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.150392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.150417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.150528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.150553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.150679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.150717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.150805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.150833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.150949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.150975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 
00:24:50.260 [2024-05-16 20:23:37.151052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.151077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.151185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.151212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.151306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.151331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.151486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.151514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.151637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.151664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.151781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.151809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.151951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.151979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.152060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.152086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.152187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.152216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.152321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.152347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 
00:24:50.260 [2024-05-16 20:23:37.152435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.152465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.152574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.152599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.152682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.152709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.152821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.260 [2024-05-16 20:23:37.152847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.260 qpair failed and we were unable to recover it. 00:24:50.260 [2024-05-16 20:23:37.152959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.152987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.153110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.153138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.153292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.153342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.153456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.153483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.153571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.153598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.153708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.153733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 
00:24:50.261 [2024-05-16 20:23:37.153814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.153840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.153951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.153979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.154076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.154105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.154196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.154229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.154350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.154378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.154477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.154508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.154602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.154632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.154765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.154792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.154880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.154907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.155036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.155081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 
00:24:50.261 [2024-05-16 20:23:37.155160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.155186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.155296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.155322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.155411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.155436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.155520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.155545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.155624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.155650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.155754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.155793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.155902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.155932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.156025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.156053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.156195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.156243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.156355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.156401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 
00:24:50.261 [2024-05-16 20:23:37.156494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.156524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.156620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.156663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.156785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.156823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.156960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.156989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.157078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.157122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.157219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.157249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.157341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.157370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.157462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.157491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.157583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.157611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.157720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.157745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 
00:24:50.261 [2024-05-16 20:23:37.157832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.157871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.157963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.157989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.158072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.158097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.158197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.158226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.158349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.158377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.158469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.158498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.158588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.158617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.158731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.158760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.158871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.158898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.158982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.159008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 
00:24:50.261 [2024-05-16 20:23:37.159095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.159120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.159224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.159253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.159346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.159374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.159464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.159493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.159601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.159627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.159758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.159787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.159883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.159928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.160011] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:24:50.261 [2024-05-16 20:23:37.160055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.160088] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.261 [2024-05-16 20:23:37.160089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it. 00:24:50.261 [2024-05-16 20:23:37.160226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.261 [2024-05-16 20:23:37.160254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.261 qpair failed and we were unable to recover it.
00:24:50.261 [2024-05-16 20:23:37.160348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.160375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.160500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.160527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.160623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.160650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.160786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.160830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.160960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.160989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.161076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.161103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.161186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.161218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.161347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.161377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.161517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.161560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.161652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.161681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 
00:24:50.262 [2024-05-16 20:23:37.161809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.161835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.161932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.161959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.162051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.162078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.162210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.162239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.162358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.162388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.162480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.162511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.162632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.162661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.162757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.162787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.162924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.162951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.163083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.163113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 
00:24:50.262 [2024-05-16 20:23:37.163218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.163264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.163380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.163410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.163508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.163538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.163661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.163690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.163806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.163835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.163957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.163984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.164067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.164093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.164206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.164236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.164369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.164398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 00:24:50.262 [2024-05-16 20:23:37.164532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.262 [2024-05-16 20:23:37.164561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.262 qpair failed and we were unable to recover it. 
00:24:50.262 [2024-05-16 20:23:37.164656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.262 [2024-05-16 20:23:37.164699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:24:50.262 qpair failed and we were unable to recover it.
00:24:50.262 [2024-05-16 20:23:37.164787 .. 20:23:37.191913] (00:24:50.262-00:24:50.265) The same three-line failure repeats for every subsequent connect attempt in this window: posix.c:1037:posix_sock_create reports connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock reports a sock connection error against addr=10.0.0.2, port=4420 on tqpair handles 0x7ff268000b90, 0x7ff270000b90, 0x7ff278000b90 and 0x1789f90; and each attempt ends with "qpair failed and we were unable to recover it."
00:24:50.265 [2024-05-16 20:23:37.191994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.265 [2024-05-16 20:23:37.192039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.265 qpair failed and we were unable to recover it. 00:24:50.265 [2024-05-16 20:23:37.192128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.265 [2024-05-16 20:23:37.192156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.265 qpair failed and we were unable to recover it. 00:24:50.265 [2024-05-16 20:23:37.192280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.265 [2024-05-16 20:23:37.192307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.265 qpair failed and we were unable to recover it. 00:24:50.265 [2024-05-16 20:23:37.192425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.265 [2024-05-16 20:23:37.192453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.265 qpair failed and we were unable to recover it. 00:24:50.265 [2024-05-16 20:23:37.192576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.192605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.192704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.192730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.192814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.192839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.192967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.192993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.193099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.193124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.193247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.193272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 
00:24:50.266 [2024-05-16 20:23:37.193381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.193406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.193514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.193541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.193657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.193685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.193785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.193815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.266 [2024-05-16 20:23:37.193955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.193981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.194090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.194115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.194205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.194230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.194387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.194430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.194531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.194559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.194682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.194710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 
00:24:50.266 [2024-05-16 20:23:37.194811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.194837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.194929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.194955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.195049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.195075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.195183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.195209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.195319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.195344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.195428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.195453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.195537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.195563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.195671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.195696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.195822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.195869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.195989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.196017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 
00:24:50.266 [2024-05-16 20:23:37.196162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.196188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.196275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.196300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.196407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.196433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.196542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.196567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.196652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.196678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.196763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.196793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.196873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.196900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.197014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.197040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.197155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.197180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.197264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.197291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 
00:24:50.266 [2024-05-16 20:23:37.197374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.197399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.197490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.197516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.197609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.197635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.197726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.197751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.197872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.197899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.198003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.198029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.198124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.198150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.198294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.198319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.198451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.198476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.198570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.198595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 
00:24:50.266 [2024-05-16 20:23:37.198674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.198699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.198783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.198809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.198905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.198934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.199025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.199051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.199134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.199160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.199272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.199297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.199383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.199408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.199519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.199544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.199624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.199649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.199734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.199760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 
00:24:50.266 [2024-05-16 20:23:37.199845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.199879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.199960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.199986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.266 qpair failed and we were unable to recover it. 00:24:50.266 [2024-05-16 20:23:37.200096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.266 [2024-05-16 20:23:37.200125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.200235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.200260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.200357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.200385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.200493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.200518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.200611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.200635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.200744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.200769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.200874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.200900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.201022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.201048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 
00:24:50.267 [2024-05-16 20:23:37.201126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.201152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.201260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.201285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.201365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.201391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.201471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.201496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.201601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.201626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.201702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.201727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.201818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.201842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.201962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.201987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.202072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.202098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.202186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.202213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 
00:24:50.267 [2024-05-16 20:23:37.202300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.202326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.202435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.202461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.202546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.202571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.202650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.202675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.202796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.202821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.202913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.202939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.203052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.203077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.203174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.203198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.203286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.203311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.203389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.203420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 
00:24:50.267 [2024-05-16 20:23:37.203506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.203531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.203634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.203660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.203769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.203794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.203882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.203908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.204019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.204044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.204157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.204181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.204269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.204305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.204393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.204418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.204528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.204553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.204653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.204678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 
00:24:50.267 [2024-05-16 20:23:37.204799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.204826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.204947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.204974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.267 qpair failed and we were unable to recover it. 00:24:50.267 [2024-05-16 20:23:37.205058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.267 [2024-05-16 20:23:37.205083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.205196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.205221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.205335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.205360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.205448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.205474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.205559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.205585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.205661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.205686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.205797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.205821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.205917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.205943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 
00:24:50.268 [2024-05-16 20:23:37.206023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.206048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.206158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.206183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.206268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.206293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.206365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.206390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.206479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.206503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.206594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.206619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.206702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.206732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.206819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.206844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.206933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.206959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.207068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.207093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 
00:24:50.268 [2024-05-16 20:23:37.207181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.207206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.207293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.207321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.207407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.207433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.207540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.207565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.207656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.207681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.207765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.207790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.207912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.207938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.208016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.208041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.208122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.208147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.208231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.208256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 
00:24:50.268 [2024-05-16 20:23:37.208346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.208372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.208523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.208562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.208657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.208685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.208800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.208825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.208974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.209002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.209080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.209105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.209191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.209217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.209303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.209329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.209438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.209463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 00:24:50.268 [2024-05-16 20:23:37.209550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.268 [2024-05-16 20:23:37.209575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.268 qpair failed and we were unable to recover it. 
00:24:50.272 [2024-05-16 20:23:37.228022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.272 [2024-05-16 20:23:37.228048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420
00:24:50.272 qpair failed and we were unable to recover it.
00:24:50.272 [2024-05-16 20:23:37.228135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.272 [2024-05-16 20:23:37.228160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420
00:24:50.272 qpair failed and we were unable to recover it.
00:24:50.273 [2024-05-16 20:23:37.228246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.273 [2024-05-16 20:23:37.228273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:50.273 qpair failed and we were unable to recover it.
00:24:50.273 [2024-05-16 20:23:37.228350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.273 [2024-05-16 20:23:37.228375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:50.273 qpair failed and we were unable to recover it.
00:24:50.273 [2024-05-16 20:23:37.228458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.273 [2024-05-16 20:23:37.228484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:50.273 qpair failed and we were unable to recover it.
00:24:50.273 [2024-05-16 20:23:37.228570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.273 [2024-05-16 20:23:37.228596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420
00:24:50.273 qpair failed and we were unable to recover it.
00:24:50.273 [2024-05-16 20:23:37.228717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.273 [2024-05-16 20:23:37.228746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:50.273 [2024-05-16 20:23:37.228756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420
00:24:50.273 qpair failed and we were unable to recover it.
00:24:50.273 [2024-05-16 20:23:37.228878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.273 [2024-05-16 20:23:37.228906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420
00:24:50.273 qpair failed and we were unable to recover it.
00:24:50.273 [2024-05-16 20:23:37.229016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.273 [2024-05-16 20:23:37.229041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420
00:24:50.273 qpair failed and we were unable to recover it.
00:24:50.273 [2024-05-16 20:23:37.229134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.273 [2024-05-16 20:23:37.229159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420
00:24:50.273 qpair failed and we were unable to recover it.
00:24:50.274 [2024-05-16 20:23:37.234244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.234269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.234381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.234408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.234491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.234517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.234601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.234627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.234742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.234777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.234877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.234904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.235023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.235049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.235135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.235160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.235294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.235320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.235394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.235420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 
00:24:50.274 [2024-05-16 20:23:37.235531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.235559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.235670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.235696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.235810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.235838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.235956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.235984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.236076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.236103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.236213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.236239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.236356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.236384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.236496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.236523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.236667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.236693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.236781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.236807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 
00:24:50.274 [2024-05-16 20:23:37.236931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.236958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.237064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.237102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.237198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.237225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.237315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.237341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.237427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.237453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.237527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.237552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.237665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.237690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.237785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.237810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.237931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.237957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 00:24:50.274 [2024-05-16 20:23:37.238092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.274 [2024-05-16 20:23:37.238117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.274 qpair failed and we were unable to recover it. 
00:24:50.274 [2024-05-16 20:23:37.238225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.238251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.238343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.238370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.238476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.238502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.238586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.238612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.238720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.238746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.238861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.238887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.238977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.239002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.239093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.239117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.239238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.239263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.239351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.239375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 
00:24:50.275 [2024-05-16 20:23:37.239520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.239545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.239646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.239672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.239758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.239783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.239893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.239919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.240001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.240026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.240134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.240160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.240246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.240271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.240347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.240373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.240459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.240488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.240583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.240623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 
00:24:50.275 [2024-05-16 20:23:37.240730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.240758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.240870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.240897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.241012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.241038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.241165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.241190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.241284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.241311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.241405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.241431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.241510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.241535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.241666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.241692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.241827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.241886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.241990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.242018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 
00:24:50.275 [2024-05-16 20:23:37.242119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.242147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.242259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.242286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.242374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.242400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.242514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.242542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.242659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.242686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.242798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.242823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.242925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.242951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.243062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.243088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.243203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.275 [2024-05-16 20:23:37.243228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.275 qpair failed and we were unable to recover it. 00:24:50.275 [2024-05-16 20:23:37.243346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.243374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 
00:24:50.276 [2024-05-16 20:23:37.243515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.243543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.243650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.243675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.243767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.243792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.243936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.243963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.244049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.244074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.244185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.244210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.244320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.244347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.244433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.244458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.244545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.244570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.244660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.244686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 
00:24:50.276 [2024-05-16 20:23:37.244766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.244792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.244893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.244921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.245037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.245062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.245172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.245196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.245276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.245301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.245416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.245441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.245556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.245582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.245690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.245716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.245828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.245867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.245961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.245987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 
00:24:50.276 [2024-05-16 20:23:37.246074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.246099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.246187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.246213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.246327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.246354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.246446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.246473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.246562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.246587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.246676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.246701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.246780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.246805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.246938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.246978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.247099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.247132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.247245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.247272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 
00:24:50.276 [2024-05-16 20:23:37.247356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.247383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.247465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.247491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.276 qpair failed and we were unable to recover it. 00:24:50.276 [2024-05-16 20:23:37.247601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.276 [2024-05-16 20:23:37.247628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.247707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.247735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.247814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.247840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.247926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.247951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.248037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.248064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.248160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.248185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.248266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.248291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.248423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.248448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 
00:24:50.277 [2024-05-16 20:23:37.248542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.248567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.248647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.248673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.248767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.248794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.248896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.248922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.249000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.249026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.249114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.249141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.249248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.249274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.249415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.249440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.249546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.249572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.249683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.249708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 
00:24:50.277 [2024-05-16 20:23:37.249848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.249895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.249982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.250009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.250094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.250120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.250204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.250230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.250345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.250371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.250484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.250514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.250593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.250621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.250704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.250731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.250812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.250838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.250932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.250959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 
00:24:50.277 [2024-05-16 20:23:37.251069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.251094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.251182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.251207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.251297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.251324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.251416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.251443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.251558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.251583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.251670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.251696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.251776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.251802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.251916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.251942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.252022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.252048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.252136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.252161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 
00:24:50.277 [2024-05-16 20:23:37.252233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.252258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.277 [2024-05-16 20:23:37.252345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.277 [2024-05-16 20:23:37.252370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.277 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.252446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.252471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.252551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.252576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.252715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.252740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.252824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.252850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.252961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.252987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.253091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.253115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.253200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.253225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.253323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.253364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 
00:24:50.278 [2024-05-16 20:23:37.253456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.253484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.253599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.253627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.253722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.253752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.253865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.253892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.254027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.254053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.254137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.254164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.254245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.254270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.254351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.254378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.254466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.254491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.254585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.254624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 
00:24:50.278 [2024-05-16 20:23:37.254723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.254750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.254905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.254934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.255045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.255071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.255196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.255223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.255343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.255369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.255480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.255508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.255601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.255628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.255713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.255738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.255829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.255860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.255970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.255995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 
00:24:50.278 [2024-05-16 20:23:37.256085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.256112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.256224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.256249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.256358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.256382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.256506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.256546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.256641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.256669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.256811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.256838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.256932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.256958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.257041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.257067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.257191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.257218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.257295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.257321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 
00:24:50.278 [2024-05-16 20:23:37.257408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.257433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.278 qpair failed and we were unable to recover it. 00:24:50.278 [2024-05-16 20:23:37.257527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.278 [2024-05-16 20:23:37.257566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.257689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.257715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.257819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.257844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.257968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.257995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.258076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.258102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.258186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.258211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.258290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.258316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.258455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.258480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.258570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.258595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 
00:24:50.279 [2024-05-16 20:23:37.258677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.258702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.258816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.258841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.258956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.258991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.259075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.259101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.259216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.259241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.259364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.259395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.259480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.259507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.259599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.259628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.259739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.259766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.259880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.259907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 
00:24:50.279 [2024-05-16 20:23:37.259989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.260015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.260152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.260177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.260287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.260313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.260396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.260424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.260514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.260541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.260651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.260677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.260767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.260793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.260892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.260919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.261035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.261061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.261138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.261163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 
00:24:50.279 [2024-05-16 20:23:37.261296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.261321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.261425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.261450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.261530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.261556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.261666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.261694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.261780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.261809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.261912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.261939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.262030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.262057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.262164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.262190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.262274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.262300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.279 [2024-05-16 20:23:37.262384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.262415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 
00:24:50.279 [2024-05-16 20:23:37.262509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.279 [2024-05-16 20:23:37.262537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.279 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.262651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.262678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.262789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.262815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.262900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.262926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.263012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.263037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.263146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.263172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.263276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.263302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.263386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.263413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.263527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.263555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.263670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.263697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 
00:24:50.280 [2024-05-16 20:23:37.263804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.263843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.263971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.264000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.264083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.264110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.264196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.264223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.264359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.264385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.264498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.264525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.264635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.264662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.264745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.264773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.264863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.264889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.265028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.265054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 
00:24:50.280 [2024-05-16 20:23:37.265133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.265161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.265244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.265270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.265356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.265383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.265464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.265492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.265649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.265688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.265836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.265872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.265962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.265990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.266071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.266098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.266210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.266235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.266319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.266345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 
00:24:50.280 [2024-05-16 20:23:37.266458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.266485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.266597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.266623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.266708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.266735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.266845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.280 [2024-05-16 20:23:37.266878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.280 qpair failed and we were unable to recover it. 00:24:50.280 [2024-05-16 20:23:37.267007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.267046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.267170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.267197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.267284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.267311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.267393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.267419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.267508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.267536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.267619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.267650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 
00:24:50.281 [2024-05-16 20:23:37.267777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.267816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.267914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.267942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.268053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.268079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.268175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.268201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.268288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.268314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.268394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.268420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.268534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.268562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.268702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.268729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.268811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.268838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.268930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.268956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 
00:24:50.281 [2024-05-16 20:23:37.269038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.269065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.269181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.269207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.269297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.269323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.269439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.269465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.269543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.269568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.269651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.269678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.269794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.269825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.269928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.269967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.270060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.270087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.270165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.270191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 
00:24:50.281 [2024-05-16 20:23:37.270302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.270327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.270428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.270468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.270560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.270587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.270676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.270702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.270795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.270822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.270912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.270938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.271020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.271051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.271135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.271163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.271270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.271296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.271409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.271434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 
00:24:50.281 [2024-05-16 20:23:37.271514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.271540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.271655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.271681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.271762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.271789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.281 qpair failed and we were unable to recover it. 00:24:50.281 [2024-05-16 20:23:37.271897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.281 [2024-05-16 20:23:37.271925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.272008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.272034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.272111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.272138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.272275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.272301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.272410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.272436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.272547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.272574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.272688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.272717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 
00:24:50.282 [2024-05-16 20:23:37.272839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.272876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.272972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.272999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.273080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.273107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.273230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.273255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.273356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.273382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.273461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.273488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.273594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.273620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.273733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.273759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.273848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.273885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.273966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.273991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 
00:24:50.282 [2024-05-16 20:23:37.274073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.274099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.274203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.274230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.274343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.274370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.274476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.274515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.274629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.274657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.274765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.274793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.274888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.274916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.274998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.275025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.275107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.275140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.275222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.275247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 
00:24:50.282 [2024-05-16 20:23:37.275361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.275386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.275463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.275489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.275633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.275661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.275746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.275772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.275859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.275886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.275967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.275994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.276102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.276132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.276241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.276267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.276359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.276386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.276483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.276520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 
00:24:50.282 [2024-05-16 20:23:37.276624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.276663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.276758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.276786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.282 [2024-05-16 20:23:37.276876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.282 [2024-05-16 20:23:37.276903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.282 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.276992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.277018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.277105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.277131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.277218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.277244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.277318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.277343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.277431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.277456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.277571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.277597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.277676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.277701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 
00:24:50.283 [2024-05-16 20:23:37.277794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.277823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.277922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.277950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.278030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.278057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.278167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.278193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.278305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.278332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.278438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.278476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.278562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.278590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.278700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.278726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.278842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.278876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.278957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.278983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 
00:24:50.283 [2024-05-16 20:23:37.279089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.279114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.279221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.279247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.279334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.279360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.279479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.279505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.279639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.279664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.279794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.279832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.279964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.279992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.280082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.280109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.280223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.280249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.280337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.280363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 
00:24:50.283 [2024-05-16 20:23:37.280472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.280498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.280574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.280600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.280723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.280761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.280876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.280923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.281042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.281070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.281155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.281180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.281303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.281333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.281418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.281443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.281524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.281550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.281672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.281710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 
00:24:50.283 [2024-05-16 20:23:37.281828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.281863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.283 [2024-05-16 20:23:37.281950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.283 [2024-05-16 20:23:37.281976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.283 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.282095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.282121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.282224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.282249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.282326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.282351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.282434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.282460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.282540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.282564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.282710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.282739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.282837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.282872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.282958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.282983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 
00:24:50.284 [2024-05-16 20:23:37.283096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.283121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.283234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.283260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.283345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.283370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.283449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.283474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.283556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.283582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.283663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.283690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.283772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.283799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.283891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.283917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.283999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.284023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.284137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.284164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 
00:24:50.284 [2024-05-16 20:23:37.284249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.284274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.284353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.284378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.284473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.284498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.284607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.284638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.284742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.284768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.284849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.284883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.284993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.285021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.285108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.285135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.285250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.285276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.285391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.285416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 
00:24:50.284 [2024-05-16 20:23:37.285508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.285535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.285621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.285649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.285771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.285798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.285886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.285915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.286047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.286073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.286166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.286191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.286294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.286320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.286419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.286447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.286537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.286563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.286686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.286725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 
00:24:50.284 [2024-05-16 20:23:37.286819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.284 [2024-05-16 20:23:37.286848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.284 qpair failed and we were unable to recover it. 00:24:50.284 [2024-05-16 20:23:37.286978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.287005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.287111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.287137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.287220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.287247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.287335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.287363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.287456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.287483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.287577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.287603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.287704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.287730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.287808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.287834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.287923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.287948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 
00:24:50.285 [2024-05-16 20:23:37.288032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.288059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.288169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.288195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.288272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.288297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.288383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.288408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.288518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.288544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.288658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.288683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.288772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.288800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.288917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.288943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.289039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.289074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.289172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.289198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 
00:24:50.285 [2024-05-16 20:23:37.289310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.289337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.289425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.289455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.289535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.289562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.289674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.289706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.289844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.289877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.289961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.289986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.290066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.290092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.290181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.290208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.290296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.290323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.290460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.290486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 
00:24:50.285 [2024-05-16 20:23:37.290571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.290598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.290718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.290748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.290872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.290903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.290993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.291020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.291135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.285 [2024-05-16 20:23:37.291161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.285 qpair failed and we were unable to recover it. 00:24:50.285 [2024-05-16 20:23:37.291254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.286 [2024-05-16 20:23:37.291282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.286 qpair failed and we were unable to recover it. 00:24:50.286 [2024-05-16 20:23:37.291392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.286 [2024-05-16 20:23:37.291418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.286 qpair failed and we were unable to recover it. 00:24:50.286 [2024-05-16 20:23:37.291568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.286 [2024-05-16 20:23:37.291608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.286 qpair failed and we were unable to recover it. 00:24:50.286 [2024-05-16 20:23:37.291747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.286 [2024-05-16 20:23:37.291786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.286 qpair failed and we were unable to recover it. 00:24:50.286 [2024-05-16 20:23:37.291879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.286 [2024-05-16 20:23:37.291911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.286 qpair failed and we were unable to recover it. 
00:24:50.286 [2024-05-16 20:23:37.292025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.286 [2024-05-16 20:23:37.292051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.286 qpair failed and we were unable to recover it. 00:24:50.286 [2024-05-16 20:23:37.292145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.286 [2024-05-16 20:23:37.292171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.286 qpair failed and we were unable to recover it. 00:24:50.286 [2024-05-16 20:23:37.292281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.286 [2024-05-16 20:23:37.292307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.286 qpair failed and we were unable to recover it. 00:24:50.286 [2024-05-16 20:23:37.292398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.286 [2024-05-16 20:23:37.292424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:24:50.286 qpair failed and we were unable to recover it. 00:24:50.286 [2024-05-16 20:23:37.292517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.286 [2024-05-16 20:23:37.292547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.286 qpair failed and we were unable to recover it. 00:24:50.286 [2024-05-16 20:23:37.292647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.286 [2024-05-16 20:23:37.292678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.286 qpair failed and we were unable to recover it. 00:24:50.286 [2024-05-16 20:23:37.292768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.286 [2024-05-16 20:23:37.292796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.286 qpair failed and we were unable to recover it. 00:24:50.286 [2024-05-16 20:23:37.292901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.286 [2024-05-16 20:23:37.292928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.286 qpair failed and we were unable to recover it. 00:24:50.286 [2024-05-16 20:23:37.293046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.286 [2024-05-16 20:23:37.293072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.286 qpair failed and we were unable to recover it. 00:24:50.286 [2024-05-16 20:23:37.293158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.286 [2024-05-16 20:23:37.293183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.286 qpair failed and we were unable to recover it. 
00:24:50.286 [2024-05-16 20:23:37.293273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.286 [2024-05-16 20:23:37.293303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.286 qpair failed and we were unable to recover it. 00:24:50.544 [2024-05-16 20:23:37.293416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.544 [2024-05-16 20:23:37.293442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.544 qpair failed and we were unable to recover it. 00:24:50.544 [2024-05-16 20:23:37.293529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.544 [2024-05-16 20:23:37.293554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.544 qpair failed and we were unable to recover it. 00:24:50.544 [2024-05-16 20:23:37.293697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.544 [2024-05-16 20:23:37.293723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.544 qpair failed and we were unable to recover it. 00:24:50.544 A controller has encountered a failure and is being reset. 00:24:50.544 [2024-05-16 20:23:37.293830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.544 [2024-05-16 20:23:37.293866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.544 qpair failed and we were unable to recover it. 00:24:50.544 [2024-05-16 20:23:37.293988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.544 [2024-05-16 20:23:37.294015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.544 qpair failed and we were unable to recover it. 00:24:50.544 [2024-05-16 20:23:37.294095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.544 [2024-05-16 20:23:37.294120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.544 qpair failed and we were unable to recover it. 00:24:50.544 [2024-05-16 20:23:37.294207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.544 [2024-05-16 20:23:37.294232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.544 qpair failed and we were unable to recover it. 00:24:50.544 [2024-05-16 20:23:37.294319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.544 [2024-05-16 20:23:37.294345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.544 qpair failed and we were unable to recover it. 00:24:50.544 [2024-05-16 20:23:37.294435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.544 [2024-05-16 20:23:37.294463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.544 qpair failed and we were unable to recover it. 
00:24:50.544 [2024-05-16 20:23:37.294564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.544 [2024-05-16 20:23:37.294589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1789f90 with addr=10.0.0.2, port=4420 00:24:50.544 qpair failed and we were unable to recover it. 00:24:50.544 [2024-05-16 20:23:37.294703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.294728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.294814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.294839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.294932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.294961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.295076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.295102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.295192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.295218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.295308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.295333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.295418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.295443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.295525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.295550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.295628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.295653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 
00:24:50.545 [2024-05-16 20:23:37.295759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.295785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.295891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.295917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.295998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.296023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.296099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.296125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.296205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.296232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.296347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.296373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.296454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.296480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.296598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.296624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.296707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.296731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.296861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.296889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 
00:24:50.545 [2024-05-16 20:23:37.296974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.296999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.297142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.297176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.297302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.297328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.297413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.297439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.297548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.297575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.297655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.297679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.297798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.297824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff270000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.297938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.297965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.298046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.298072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 00:24:50.545 [2024-05-16 20:23:37.298149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.298174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff278000b90 with addr=10.0.0.2, port=4420 00:24:50.545 qpair failed and we were unable to recover it. 
00:24:50.545 [2024-05-16 20:23:37.298287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-05-16 20:23:37.298327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1797bb0 with addr=10.0.0.2, port=4420 00:24:50.545 [2024-05-16 20:23:37.298346] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1797bb0 is same with the state(5) to be set 00:24:50.545 [2024-05-16 20:23:37.298371] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1797bb0 (9): Bad file descriptor 00:24:50.545 [2024-05-16 20:23:37.298390] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.545 [2024-05-16 20:23:37.298404] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.545 [2024-05-16 20:23:37.298429] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.545 Unable to reset the controller. 00:24:50.545 [2024-05-16 20:23:37.342591] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:50.545 [2024-05-16 20:23:37.342662] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:50.545 [2024-05-16 20:23:37.342692] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.545 [2024-05-16 20:23:37.342705] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.545 [2024-05-16 20:23:37.342715] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
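The errno 111 in the long run of failures above is ECONNREFUSED: the disconnect test keeps tearing the target down while the host retries the TCP qpair, so repeated refusals followed by "Unable to reset the controller" are the behaviour this test case is exercising, not an infrastructure fault. If a run like this needs deeper inspection, the app_setup_trace notices already name the tooling; a minimal sketch, assuming the nvmf_tgt instance from this log (shm id 0, trace file /dev/shm/nvmf_trace.0) is still running:

    # Snapshot the nvmf tracepoints of instance 0, exactly as the notice above suggests.
    spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # Or keep the shared-memory trace file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0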
00:24:50.545 [2024-05-16 20:23:37.342795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:24:50.545 [2024-05-16 20:23:37.342846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:24:50.545 [2024-05-16 20:23:37.343088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:24:50.545 [2024-05-16 20:23:37.343092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:51.110 Malloc0 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:51.110 [2024-05-16 20:23:38.133374] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:51.110 20:23:38 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:51.110 [2024-05-16 20:23:38.161377] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:51.110 [2024-05-16 20:23:38.161635] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.110 20:23:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 319972 00:24:51.368 Controller properly reset. 00:24:56.636 Initializing NVMe Controllers 00:24:56.636 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:56.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:56.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:56.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:56.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:56.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:56.637 Initialization complete. Launching workers. 
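The xtrace lines above are the whole target-side setup that nvmf_target_disconnect_tc2 drives through rpc_cmd (a wrapper around scripts/rpc.py). A minimal sketch of the same sequence issued directly, assuming a running nvmf_tgt and the address, port and NQN shown in this log:

    RPC=scripts/rpc.py
    # 64 MiB malloc bdev with 512-byte blocks, exposed as namespace 1 of cnode1.
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Listen on the target-side address used throughout this run, plus discovery.
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With that in place, the "Controller properly reset" and "Attaching to NVMe over Fabrics controller" lines that follow are the host side reconnecting once the target is stable again.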
00:24:56.637 Starting thread on core 1 00:24:56.637 Starting thread on core 2 00:24:56.637 Starting thread on core 3 00:24:56.637 Starting thread on core 0 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:24:56.637 00:24:56.637 real 0m10.714s 00:24:56.637 user 0m34.489s 00:24:56.637 sys 0m7.657s 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:56.637 ************************************ 00:24:56.637 END TEST nvmf_target_disconnect_tc2 00:24:56.637 ************************************ 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:56.637 rmmod nvme_tcp 00:24:56.637 rmmod nvme_fabrics 00:24:56.637 rmmod nvme_keyring 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 320431 ']' 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 320431 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 320431 ']' 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 320431 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 320431 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 320431' 00:24:56.637 killing process with pid 320431 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 320431 00:24:56.637 [2024-05-16 20:23:43.451914] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 
00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 320431 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.637 20:23:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.171 20:23:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:59.171 00:24:59.171 real 0m15.444s 00:24:59.171 user 0m59.789s 00:24:59.171 sys 0m10.040s 00:24:59.171 20:23:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:59.171 20:23:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:59.171 ************************************ 00:24:59.171 END TEST nvmf_target_disconnect 00:24:59.171 ************************************ 00:24:59.171 20:23:45 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:24:59.171 20:23:45 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:59.171 20:23:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:59.171 20:23:45 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:24:59.171 00:24:59.171 real 19m10.839s 00:24:59.171 user 46m14.592s 00:24:59.171 sys 4m38.876s 00:24:59.171 20:23:45 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:59.171 20:23:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:59.171 ************************************ 00:24:59.171 END TEST nvmf_tcp 00:24:59.171 ************************************ 00:24:59.171 20:23:45 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:24:59.171 20:23:45 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:59.171 20:23:45 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:59.171 20:23:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:59.171 20:23:45 -- common/autotest_common.sh@10 -- # set +x 00:24:59.171 ************************************ 00:24:59.171 START TEST spdkcli_nvmf_tcp 00:24:59.171 ************************************ 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:59.171 * Looking for test storage... 
00:24:59.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=321633 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 321633 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 321633 ']' 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:59.171 20:23:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:59.171 [2024-05-16 20:23:45.976580] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:24:59.172 [2024-05-16 20:23:45.976675] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid321633 ] 00:24:59.172 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.172 [2024-05-16 20:23:46.034745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:59.172 [2024-05-16 20:23:46.141185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.172 [2024-05-16 20:23:46.141188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.172 20:23:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:59.172 20:23:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:24:59.172 20:23:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:59.172 20:23:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:59.172 20:23:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:59.172 20:23:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:59.172 20:23:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:24:59.172 20:23:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:59.172 20:23:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:59.172 20:23:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:59.172 20:23:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:59.172 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:59.172 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:59.172 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:59.172 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:59.172 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:59.172 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:59.172 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:59.172 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:59.172 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:59.172 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:59.172 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:59.172 ' 00:25:02.455 [2024-05-16 20:23:48.931230] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.020 [2024-05-16 20:23:50.158995] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:03.020 [2024-05-16 20:23:50.159583] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:05.546 [2024-05-16 20:23:52.438592] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:07.445 [2024-05-16 20:23:54.400834] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:08.820 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:08.820 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:08.820 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:08.820 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:08.820 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:08.820 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:08.820 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:08.820 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:08.820 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:08.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:08.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:08.820 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:08.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:08.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:08.820 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:08.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:08.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:08.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:08.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:08.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:08.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:08.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:08.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:08.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:08.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:08.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:08.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:08.820 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:09.078 20:23:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:09.078 20:23:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:09.078 20:23:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:09.078 20:23:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:09.078 20:23:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:09.078 20:23:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:09.078 20:23:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:09.078 20:23:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:25:09.336 20:23:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:09.594 20:23:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:09.594 20:23:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:09.594 20:23:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:09.594 20:23:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:09.594 20:23:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:09.594 20:23:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:09.594 20:23:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:09.594 20:23:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:09.594 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:09.594 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:09.594 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:09.594 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:09.594 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:09.594 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:09.594 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:09.594 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:09.594 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:09.594 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:09.594 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:09.594 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:09.594 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:09.594 ' 00:25:14.853 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:14.853 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:14.853 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:14.853 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:14.853 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:14.853 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:14.853 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:14.853 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:14.853 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:14.853 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:14.853 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:14.853 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:14.853 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:14.853 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:14.853 20:24:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:14.853 20:24:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:14.853 20:24:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:14.853 20:24:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 321633 00:25:14.853 20:24:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 321633 ']' 00:25:14.853 20:24:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 321633 00:25:14.853 20:24:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:25:14.853 20:24:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:14.853 20:24:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 321633 00:25:14.853 20:24:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:14.853 20:24:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:14.853 20:24:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 321633' 00:25:14.853 killing process with pid 321633 00:25:14.853 20:24:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 321633 00:25:14.853 [2024-05-16 20:24:01.758252] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:14.853 20:24:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 321633 00:25:15.112 20:24:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:15.112 20:24:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:15.112 20:24:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 321633 ']' 00:25:15.112 20:24:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 321633 00:25:15.112 20:24:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 321633 ']' 00:25:15.112 20:24:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 321633 00:25:15.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (321633) - No such process 00:25:15.112 20:24:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 321633 is not found' 00:25:15.112 Process with pid 321633 is not found 00:25:15.112 20:24:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:15.112 20:24:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:15.112 20:24:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:15.112 00:25:15.112 real 0m16.154s 00:25:15.112 user 0m34.066s 00:25:15.112 sys 0m0.899s 00:25:15.112 20:24:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:15.112 20:24:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
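The spdkcli run above builds the whole /nvmf tree (malloc bdevs, a TCP transport, subsystems with namespaces, listeners and hosts), diffs the resulting listing against a match file, then deletes everything. The test batches those commands through spdkcli_job.py; as a rough sketch, a few of the same commands from the log can be replayed one invocation at a time with scripts/spdkcli.py against the running target's default RPC socket:

    CLI=scripts/spdkcli.py
    $CLI "/bdevs/malloc create 32 512 Malloc1"
    $CLI "nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192"
    $CLI "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
    $CLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1"
    $CLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4"
    # Same listing that the check_match step compares against spdkcli_nvmf.test.match.
    $CLI ll /nvmf
    # Teardown mirrors the delete commands executed above.
    $CLI "/nvmf/subsystem delete_all"
    $CLI "/bdevs/malloc delete Malloc1"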
00:25:15.112 ************************************ 00:25:15.112 END TEST spdkcli_nvmf_tcp 00:25:15.112 ************************************ 00:25:15.112 20:24:02 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:15.112 20:24:02 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:15.112 20:24:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:15.112 20:24:02 -- common/autotest_common.sh@10 -- # set +x 00:25:15.112 ************************************ 00:25:15.112 START TEST nvmf_identify_passthru 00:25:15.112 ************************************ 00:25:15.112 20:24:02 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:15.112 * Looking for test storage... 00:25:15.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:15.112 20:24:02 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:15.112 20:24:02 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.112 20:24:02 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.112 20:24:02 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.112 20:24:02 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.112 20:24:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.112 20:24:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.112 20:24:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:15.112 20:24:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:15.112 20:24:02 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:15.112 20:24:02 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.112 20:24:02 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.112 20:24:02 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.112 20:24:02 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.112 20:24:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.112 20:24:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.112 20:24:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:15.112 20:24:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.112 20:24:02 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.112 20:24:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:15.112 20:24:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:15.112 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:15.113 20:24:02 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:15.113 20:24:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:17.011 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:17.011 20:24:04 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:25:17.011 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:17.011 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:17.011 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:17.012 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:17.012 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:17.012 Found net devices under 0000:09:00.0: cvl_0_0 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:17.012 Found net devices under 0000:09:00.1: cvl_0_1 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
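The device discovery above matches the two Intel E810 functions (0x8086:0x159b) and then resolves each PCI function to its kernel netdev through sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A tiny standalone sketch of that mapping, using the two bus addresses found in this log:

    # Map each E810 PCI function to the net device the kernel created for it.
    for pci in 0000:09:00.0 0000:09:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] && echo "Found net device under $pci: $(basename "$netdir")"
        done
    done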
00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:17.012 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:17.270 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:17.270 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.270 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:17.270 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.270 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:17.270 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:17.270 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:17.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:17.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:25:17.270 00:25:17.270 --- 10.0.0.2 ping statistics --- 00:25:17.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.270 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:25:17.270 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:17.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:17.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:25:17.270 00:25:17.270 --- 10.0.0.1 ping statistics --- 00:25:17.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.270 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:25:17.270 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:17.270 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:25:17.270 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:17.270 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.270 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:17.270 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:17.270 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.270 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:17.270 20:24:04 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:17.270 20:24:04 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:17.270 20:24:04 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:17.270 20:24:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:17.270 20:24:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:17.270 20:24:04 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:25:17.270 20:24:04 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:25:17.270 20:24:04 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:25:17.270 20:24:04 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:25:17.270 20:24:04 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:25:17.270 20:24:04 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:25:17.270 20:24:04 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:17.270 20:24:04 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:17.270 20:24:04 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:25:17.270 20:24:04 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:25:17.270 20:24:04 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:0b:00.0 00:25:17.270 20:24:04 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:0b:00.0 00:25:17.270 20:24:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:25:17.270 20:24:04 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:25:17.270 20:24:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:25:17.270 20:24:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:17.270 20:24:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:17.270 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.454 
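For reference, the test-bed plumbing that nvmf_tcp_init performs in the trace above reduces to the commands below. This is a condensed sketch of this particular run: cvl_0_0/cvl_0_1 are the two E810 netdevs found earlier, and the 10.0.0.1/10.0.0.2 addresses come from common.sh, so both will differ on other nodes.

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Everything the target does from here on is therefore wrapped in 'ip netns exec cvl_0_0_ns_spdk', which is what NVMF_TARGET_NS_CMD expands to.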
20:24:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:25:21.454 20:24:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:25:21.454 20:24:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:21.454 20:24:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:21.454 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.637 20:24:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:25.637 20:24:12 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:25.637 20:24:12 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:25.637 20:24:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:25.637 20:24:12 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:25.637 20:24:12 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:25.637 20:24:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:25.637 20:24:12 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=326750 00:25:25.637 20:24:12 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:25.637 20:24:12 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:25.637 20:24:12 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 326750 00:25:25.637 20:24:12 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 326750 ']' 00:25:25.637 20:24:12 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.637 20:24:12 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:25.637 20:24:12 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.637 20:24:12 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:25.637 20:24:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:25.637 [2024-05-16 20:24:12.680422] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:25:25.637 [2024-05-16 20:24:12.680506] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:25.637 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.637 [2024-05-16 20:24:12.744414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:25.895 [2024-05-16 20:24:12.856698] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:25.895 [2024-05-16 20:24:12.856759] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:25.895 [2024-05-16 20:24:12.856772] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:25.895 [2024-05-16 20:24:12.856782] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:25.895 [2024-05-16 20:24:12.856791] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:25.895 [2024-05-16 20:24:12.856875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:25.895 [2024-05-16 20:24:12.856938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:25.895 [2024-05-16 20:24:12.857006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:25.895 [2024-05-16 20:24:12.857009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.895 20:24:12 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:25.895 20:24:12 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:25:25.895 20:24:12 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:25.895 20:24:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.895 20:24:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:25.895 INFO: Log level set to 20 00:25:25.895 INFO: Requests: 00:25:25.895 { 00:25:25.895 "jsonrpc": "2.0", 00:25:25.895 "method": "nvmf_set_config", 00:25:25.895 "id": 1, 00:25:25.895 "params": { 00:25:25.895 "admin_cmd_passthru": { 00:25:25.895 "identify_ctrlr": true 00:25:25.895 } 00:25:25.895 } 00:25:25.895 } 00:25:25.895 00:25:25.895 INFO: response: 00:25:25.895 { 00:25:25.895 "jsonrpc": "2.0", 00:25:25.895 "id": 1, 00:25:25.895 "result": true 00:25:25.895 } 00:25:25.895 00:25:25.895 20:24:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.895 20:24:12 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:25.895 20:24:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.895 20:24:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:25.895 INFO: Setting log level to 20 00:25:25.895 INFO: Setting log level to 20 00:25:25.895 INFO: Log level set to 20 00:25:25.895 INFO: Log level set to 20 00:25:25.895 INFO: Requests: 00:25:25.895 { 00:25:25.895 "jsonrpc": "2.0", 00:25:25.895 "method": "framework_start_init", 00:25:25.895 "id": 1 00:25:25.895 } 00:25:25.895 00:25:25.895 INFO: Requests: 00:25:25.895 { 00:25:25.895 "jsonrpc": "2.0", 00:25:25.895 "method": "framework_start_init", 00:25:25.895 "id": 1 00:25:25.895 } 00:25:25.895 00:25:25.895 [2024-05-16 20:24:13.008099] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:25.895 INFO: response: 00:25:25.895 { 00:25:25.895 "jsonrpc": "2.0", 00:25:25.895 "id": 1, 00:25:25.895 "result": true 00:25:25.895 } 00:25:25.895 00:25:25.895 INFO: response: 00:25:25.895 { 00:25:25.895 "jsonrpc": "2.0", 00:25:25.895 "id": 1, 00:25:25.895 "result": true 00:25:25.895 } 00:25:25.895 00:25:25.895 20:24:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.895 20:24:13 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:25.895 20:24:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.895 20:24:13 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:25:25.895 INFO: Setting log level to 40 00:25:25.895 INFO: Setting log level to 40 00:25:25.896 INFO: Setting log level to 40 00:25:25.896 [2024-05-16 20:24:13.018081] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.896 20:24:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.896 20:24:13 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:25.896 20:24:13 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:25.896 20:24:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:26.153 20:24:13 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:25:26.153 20:24:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.153 20:24:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:29.433 Nvme0n1 00:25:29.433 20:24:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.433 20:24:15 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:29.433 20:24:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.433 20:24:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:29.433 20:24:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.433 20:24:15 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:29.433 20:24:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.433 20:24:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:29.433 20:24:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.433 20:24:15 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:29.433 20:24:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.433 20:24:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:29.433 [2024-05-16 20:24:15.910883] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:29.434 [2024-05-16 20:24:15.911168] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.434 20:24:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.434 20:24:15 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:29.434 20:24:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.434 20:24:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:29.434 [ 00:25:29.434 { 00:25:29.434 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:29.434 "subtype": "Discovery", 00:25:29.434 "listen_addresses": [], 00:25:29.434 "allow_any_host": true, 00:25:29.434 "hosts": [] 00:25:29.434 }, 00:25:29.434 { 00:25:29.434 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:29.434 "subtype": "NVMe", 00:25:29.434 "listen_addresses": [ 00:25:29.434 { 00:25:29.434 "trtype": "TCP", 
00:25:29.434 "adrfam": "IPv4", 00:25:29.434 "traddr": "10.0.0.2", 00:25:29.434 "trsvcid": "4420" 00:25:29.434 } 00:25:29.434 ], 00:25:29.434 "allow_any_host": true, 00:25:29.434 "hosts": [], 00:25:29.434 "serial_number": "SPDK00000000000001", 00:25:29.434 "model_number": "SPDK bdev Controller", 00:25:29.434 "max_namespaces": 1, 00:25:29.434 "min_cntlid": 1, 00:25:29.434 "max_cntlid": 65519, 00:25:29.434 "namespaces": [ 00:25:29.434 { 00:25:29.434 "nsid": 1, 00:25:29.434 "bdev_name": "Nvme0n1", 00:25:29.434 "name": "Nvme0n1", 00:25:29.434 "nguid": "9F6E1805460B48ADAC63517E0BF22ACE", 00:25:29.434 "uuid": "9f6e1805-460b-48ad-ac63-517e0bf22ace" 00:25:29.434 } 00:25:29.434 ] 00:25:29.434 } 00:25:29.434 ] 00:25:29.434 20:24:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.434 20:24:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:29.434 20:24:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:29.434 20:24:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:29.434 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.434 20:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:25:29.434 20:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:29.434 20:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:29.434 20:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:29.434 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.434 20:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:29.434 20:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:25:29.434 20:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:29.434 20:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:29.434 20:24:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.434 20:24:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:29.434 20:24:16 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.434 20:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:29.434 20:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:29.434 20:24:16 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:29.434 20:24:16 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:29.434 20:24:16 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:29.434 20:24:16 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:29.434 20:24:16 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:29.434 20:24:16 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:29.434 rmmod nvme_tcp 00:25:29.434 rmmod nvme_fabrics 00:25:29.434 rmmod 
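Condensed from the trace above, the passthru-identify flow is: start nvmf_tgt inside the target namespace, enable identify passthrough before framework init, attach the local NVMe drive, export it over TCP, then check that identify data read over the fabric matches identify data read directly over PCIe. A rough recap of this run follows; rpc_cmd is the autotest helper that issues these JSON-RPCs (normally via scripts/rpc.py), and the BDF 0000:0b:00.0, the cvl_0_* names and the BTLJ72430F4Q1P0FGN serial are all specific to this node.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk              # shorthand for this sketch

# target, backgrounded; the script waits for the RPC socket before continuing
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

rpc_cmd nvmf_set_config --passthru-identify-ctrlr                   # must precede framework init
rpc_cmd framework_start_init
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# identify the same controller twice and compare serial/model strings
$SPDK/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 | grep 'Serial Number:'
$SPDK/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    | grep 'Serial Number:'

The pass condition is simply that the two serial numbers and the two model numbers agree, which is what the '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' and '[' INTEL '!=' INTEL ']' tests in the trace are checking before the subsystem is deleted and the target torn down.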
nvme_keyring 00:25:29.434 20:24:16 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:29.434 20:24:16 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:29.434 20:24:16 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:29.434 20:24:16 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 326750 ']' 00:25:29.434 20:24:16 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 326750 00:25:29.434 20:24:16 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 326750 ']' 00:25:29.434 20:24:16 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 326750 00:25:29.434 20:24:16 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:25:29.434 20:24:16 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:29.434 20:24:16 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 326750 00:25:29.434 20:24:16 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:29.434 20:24:16 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:29.434 20:24:16 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 326750' 00:25:29.434 killing process with pid 326750 00:25:29.434 20:24:16 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 326750 00:25:29.434 [2024-05-16 20:24:16.366971] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:29.434 20:24:16 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 326750 00:25:30.808 20:24:17 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:30.808 20:24:17 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:30.808 20:24:17 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:30.808 20:24:17 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:30.808 20:24:17 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:30.808 20:24:17 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.808 20:24:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:30.808 20:24:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.341 20:24:19 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:33.341 00:25:33.341 real 0m17.875s 00:25:33.341 user 0m26.405s 00:25:33.341 sys 0m2.294s 00:25:33.341 20:24:19 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:33.341 20:24:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:33.341 ************************************ 00:25:33.341 END TEST nvmf_identify_passthru 00:25:33.341 ************************************ 00:25:33.341 20:24:19 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:33.341 20:24:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:33.341 20:24:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:33.341 20:24:19 -- common/autotest_common.sh@10 -- # set +x 00:25:33.341 ************************************ 00:25:33.341 START TEST nvmf_dif 00:25:33.341 
************************************ 00:25:33.341 20:24:19 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:33.341 * Looking for test storage... 00:25:33.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:33.341 20:24:20 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:33.341 20:24:20 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.341 20:24:20 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.341 20:24:20 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.341 20:24:20 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.341 20:24:20 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.341 20:24:20 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.341 20:24:20 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:25:33.341 20:24:20 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:33.341 20:24:20 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:33.341 20:24:20 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:33.341 20:24:20 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:33.341 20:24:20 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:33.341 20:24:20 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:33.341 20:24:20 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.342 20:24:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:33.342 20:24:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.342 20:24:20 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:33.342 20:24:20 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:33.342 20:24:20 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:33.342 20:24:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:35.244 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:35.244 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:35.244 20:24:22 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:35.244 Found net devices under 0000:09:00.0: cvl_0_0 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:35.244 Found net devices under 0000:09:00.1: cvl_0_1 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:35.244 20:24:22 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:35.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:35.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:25:35.244 00:25:35.244 --- 10.0.0.2 ping statistics --- 00:25:35.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.244 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:25:35.245 20:24:22 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:35.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:35.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:25:35.245 00:25:35.245 --- 10.0.0.1 ping statistics --- 00:25:35.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.245 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:25:35.245 20:24:22 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:35.245 20:24:22 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:35.245 20:24:22 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:35.245 20:24:22 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:36.180 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:36.180 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:36.180 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:36.180 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:36.180 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:36.180 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:36.180 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:36.180 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:36.180 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:36.180 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:36.180 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:36.180 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:36.180 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:36.180 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:36.180 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:36.180 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:36.180 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:36.439 20:24:23 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.439 20:24:23 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:36.439 20:24:23 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:36.439 20:24:23 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.439 20:24:23 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:36.439 20:24:23 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:36.439 20:24:23 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:36.439 20:24:23 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:25:36.439 20:24:23 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:36.439 20:24:23 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:36.439 20:24:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:36.439 20:24:23 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=330013 00:25:36.439 20:24:23 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:36.439 20:24:23 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 330013 00:25:36.439 20:24:23 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 330013 ']' 00:25:36.439 20:24:23 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.439 20:24:23 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:36.439 20:24:23 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.439 20:24:23 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:36.439 20:24:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:36.439 [2024-05-16 20:24:23.465413] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:25:36.439 [2024-05-16 20:24:23.465482] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.439 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.439 [2024-05-16 20:24:23.536241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.706 [2024-05-16 20:24:23.653674] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.706 [2024-05-16 20:24:23.653736] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.706 [2024-05-16 20:24:23.653752] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.706 [2024-05-16 20:24:23.653765] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.706 [2024-05-16 20:24:23.653777] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:36.706 [2024-05-16 20:24:23.653807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.706 20:24:23 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:36.706 20:24:23 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:25:36.706 20:24:23 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:36.706 20:24:23 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:36.706 20:24:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:36.706 20:24:23 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:36.706 20:24:23 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:36.706 20:24:23 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:36.706 20:24:23 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.706 20:24:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:36.706 [2024-05-16 20:24:23.788735] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.706 20:24:23 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.706 20:24:23 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:36.706 20:24:23 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:36.706 20:24:23 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:36.706 20:24:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:36.706 ************************************ 00:25:36.706 START TEST fio_dif_1_default 00:25:36.706 ************************************ 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:36.706 bdev_null0 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.706 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:36.706 [2024-05-16 20:24:23.848830] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:36.706 [2024-05-16 20:24:23.849094] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:36.964 { 00:25:36.964 "params": { 00:25:36.964 "name": "Nvme$subsystem", 00:25:36.964 "trtype": "$TEST_TRANSPORT", 00:25:36.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.964 "adrfam": "ipv4", 00:25:36.964 "trsvcid": "$NVMF_PORT", 00:25:36.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.964 "hdgst": ${hdgst:-false}, 00:25:36.964 "ddgst": ${ddgst:-false} 00:25:36.964 }, 00:25:36.964 "method": "bdev_nvme_attach_controller" 00:25:36.964 } 00:25:36.964 EOF 
00:25:36.964 )") 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:25:36.964 20:24:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:36.964 "params": { 00:25:36.964 "name": "Nvme0", 00:25:36.964 "trtype": "tcp", 00:25:36.964 "traddr": "10.0.0.2", 00:25:36.964 "adrfam": "ipv4", 00:25:36.964 "trsvcid": "4420", 00:25:36.965 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:36.965 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:36.965 "hdgst": false, 00:25:36.965 "ddgst": false 00:25:36.965 }, 00:25:36.965 "method": "bdev_nvme_attach_controller" 00:25:36.965 }' 00:25:36.965 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:25:36.965 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:25:36.965 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:25:36.965 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:36.965 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:25:36.965 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:25:36.965 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:25:36.965 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:25:36.965 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:36.965 20:24:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:37.223 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:37.223 fio-3.35 00:25:37.223 Starting 1 thread 00:25:37.223 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.423 00:25:49.423 filename0: (groupid=0, jobs=1): err= 0: pid=330241: Thu May 16 20:24:34 2024 00:25:49.423 read: IOPS=205, BW=820KiB/s (840kB/s)(8224KiB/10026msec) 00:25:49.423 slat (nsec): min=4545, max=31003, avg=9441.22, stdev=2262.27 00:25:49.423 clat (usec): min=524, max=48573, avg=19476.06, stdev=20316.02 00:25:49.423 lat (usec): min=532, max=48588, avg=19485.50, stdev=20315.95 00:25:49.423 clat percentiles (usec): 00:25:49.423 | 1.00th=[ 570], 5.00th=[ 578], 10.00th=[ 586], 20.00th=[ 603], 00:25:49.423 | 30.00th=[ 619], 40.00th=[ 644], 50.00th=[ 668], 60.00th=[41157], 00:25:49.423 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:49.423 | 99.00th=[41681], 99.50th=[42206], 99.90th=[48497], 
99.95th=[48497], 00:25:49.423 | 99.99th=[48497] 00:25:49.423 bw ( KiB/s): min= 704, max= 960, per=99.97%, avg=820.80, stdev=60.78, samples=20 00:25:49.423 iops : min= 176, max= 240, avg=205.20, stdev=15.20, samples=20 00:25:49.423 lat (usec) : 750=53.70% 00:25:49.423 lat (msec) : 50=46.30% 00:25:49.423 cpu : usr=90.26%, sys=9.45%, ctx=30, majf=0, minf=261 00:25:49.423 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:49.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.423 issued rwts: total=2056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:49.423 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:49.423 00:25:49.423 Run status group 0 (all jobs): 00:25:49.423 READ: bw=820KiB/s (840kB/s), 820KiB/s-820KiB/s (840kB/s-840kB/s), io=8224KiB (8421kB), run=10026-10026msec 00:25:49.423 20:24:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:49.423 20:24:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:25:49.423 20:24:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:25:49.423 20:24:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:49.423 20:24:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:25:49.423 20:24:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:49.423 20:24:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.423 20:24:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:49.423 20:24:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.423 20:24:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:49.423 20:24:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.423 20:24:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:49.423 20:24:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.423 00:25:49.423 real 0m11.166s 00:25:49.423 user 0m10.326s 00:25:49.423 sys 0m1.199s 00:25:49.423 20:24:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:49.423 20:24:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:49.423 ************************************ 00:25:49.423 END TEST fio_dif_1_default 00:25:49.423 ************************************ 00:25:49.423 20:24:35 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:49.423 20:24:35 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:49.423 20:24:35 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:49.423 20:24:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:49.423 ************************************ 00:25:49.423 START TEST fio_dif_1_multi_subsystems 00:25:49.423 ************************************ 00:25:49.423 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:25:49.423 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:25:49.423 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:49.423 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems 
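Before the multi-subsystem variant that starts here, a recap of what the single-file fio_dif_1_default run above actually set up. On the target side: a null bdev with 16 bytes of metadata and DIF type 1, exported through a TCP transport created with --dif-insert-or-strip so the target inserts and strips protection information itself. On the initiator side: stock fio with SPDK's bdev ioengine, given a JSON config that attaches to the subsystem over TCP. The sketch below uses the values from this run; the JSON envelope is the standard SPDK "subsystems" config layout wrapped around the fragment printed in the log, and the fio job body is a reconstruction from the parameters fio reported (rw=randread, bs=4096, iodepth=4), not a copy of dif.sh.

# target side (still inside the cvl_0_0_ns_spdk namespace)
rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# initiator side: JSON config handing fio's bdev plugin an NVMe-oF/TCP controller
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# hypothetical job file matching what fio reported for job "filename0"
cat > /tmp/filename0.fio <<'EOF'
[filename0]
ioengine=spdk_bdev
filename=Nvme0n1
rw=randread
bs=4096
iodepth=4
EOF

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
LD_PRELOAD=$SPDK/build/fio/spdk_bdev /usr/src/fio/fio /tmp/filename0.fio \
    --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0.json
# dif.sh passes the same two pieces as /dev/fd/62 and /dev/fd/61 via process substitution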
-- target/dif.sh@28 -- # local sub 00:25:49.423 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:49.423 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:25:49.423 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:25:49.423 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:49.423 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.423 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:49.423 bdev_null0 00:25:49.423 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.423 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:49.423 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.423 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:49.423 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:49.424 [2024-05-16 20:24:35.075195] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:49.424 bdev_null1 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.424 
20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:49.424 { 00:25:49.424 "params": { 00:25:49.424 "name": "Nvme$subsystem", 00:25:49.424 "trtype": "$TEST_TRANSPORT", 00:25:49.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:49.424 "adrfam": "ipv4", 00:25:49.424 "trsvcid": "$NVMF_PORT", 00:25:49.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:49.424 "hdgst": ${hdgst:-false}, 00:25:49.424 "ddgst": ${ddgst:-false} 00:25:49.424 }, 00:25:49.424 "method": "bdev_nvme_attach_controller" 00:25:49.424 } 00:25:49.424 EOF 00:25:49.424 )") 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:25:49.424 20:24:35 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:49.424 { 00:25:49.424 "params": { 00:25:49.424 "name": "Nvme$subsystem", 00:25:49.424 "trtype": "$TEST_TRANSPORT", 00:25:49.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:49.424 "adrfam": "ipv4", 00:25:49.424 "trsvcid": "$NVMF_PORT", 00:25:49.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:49.424 "hdgst": ${hdgst:-false}, 00:25:49.424 "ddgst": ${ddgst:-false} 00:25:49.424 }, 00:25:49.424 "method": "bdev_nvme_attach_controller" 00:25:49.424 } 00:25:49.424 EOF 00:25:49.424 )") 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
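
For reference, the target-side bring-up that the xtrace above performs for the two subsystems reduces to a handful of rpc.py calls. The sketch below is a minimal standalone restatement of those traced commands, assuming an SPDK nvmf target is already running with a TCP transport created (that happened earlier in this log) and using scripts/rpc.py directly in place of the test framework's rpc_cmd wrapper.

# Sketch of the target-side setup traced above (fio_dif_1_multi_subsystems).
# Assumptions: nvmf_tgt is already running and a TCP transport exists;
# scripts/rpc.py stands in for the rpc_cmd wrapper used by the test.
RPC=./scripts/rpc.py

for i in 0 1; do
    # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
    $RPC bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1

    # One subsystem per bdev: create it, attach the namespace, listen on TCP
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
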
00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:49.424 "params": { 00:25:49.424 "name": "Nvme0", 00:25:49.424 "trtype": "tcp", 00:25:49.424 "traddr": "10.0.0.2", 00:25:49.424 "adrfam": "ipv4", 00:25:49.424 "trsvcid": "4420", 00:25:49.424 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:49.424 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:49.424 "hdgst": false, 00:25:49.424 "ddgst": false 00:25:49.424 }, 00:25:49.424 "method": "bdev_nvme_attach_controller" 00:25:49.424 },{ 00:25:49.424 "params": { 00:25:49.424 "name": "Nvme1", 00:25:49.424 "trtype": "tcp", 00:25:49.424 "traddr": "10.0.0.2", 00:25:49.424 "adrfam": "ipv4", 00:25:49.424 "trsvcid": "4420", 00:25:49.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:49.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:49.424 "hdgst": false, 00:25:49.424 "ddgst": false 00:25:49.424 }, 00:25:49.424 "method": "bdev_nvme_attach_controller" 00:25:49.424 }' 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:49.424 20:24:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:49.424 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:49.424 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:49.424 fio-3.35 00:25:49.424 Starting 2 threads 00:25:49.424 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.393 00:25:59.393 filename0: (groupid=0, jobs=1): err= 0: pid=331643: Thu May 16 20:24:46 2024 00:25:59.393 read: IOPS=97, BW=390KiB/s (400kB/s)(3904KiB/10005msec) 00:25:59.393 slat (nsec): min=7112, max=80040, avg=9517.05, stdev=4540.59 00:25:59.393 clat (usec): min=40803, max=41222, avg=40971.10, stdev=45.97 00:25:59.393 lat (usec): min=40810, max=41259, avg=40980.61, stdev=46.39 00:25:59.393 clat percentiles (usec): 00:25:59.393 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:25:59.393 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:25:59.393 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:59.393 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:25:59.393 | 99.99th=[41157] 
00:25:59.393 bw ( KiB/s): min= 384, max= 416, per=49.63%, avg=388.80, stdev=11.72, samples=20 00:25:59.393 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:25:59.393 lat (msec) : 50=100.00% 00:25:59.393 cpu : usr=94.38%, sys=5.32%, ctx=15, majf=0, minf=168 00:25:59.393 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:59.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.393 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.393 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:59.393 filename1: (groupid=0, jobs=1): err= 0: pid=331644: Thu May 16 20:24:46 2024 00:25:59.393 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10007msec) 00:25:59.393 slat (nsec): min=5234, max=29714, avg=9448.71, stdev=3505.55 00:25:59.393 clat (usec): min=1465, max=41627, avg=40812.10, stdev=2520.16 00:25:59.393 lat (usec): min=1471, max=41649, avg=40821.55, stdev=2520.03 00:25:59.393 clat percentiles (usec): 00:25:59.393 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:25:59.393 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:25:59.393 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:59.393 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:25:59.393 | 99.99th=[41681] 00:25:59.393 bw ( KiB/s): min= 384, max= 416, per=49.88%, avg=390.40, stdev=13.13, samples=20 00:25:59.393 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:25:59.393 lat (msec) : 2=0.41%, 50=99.59% 00:25:59.393 cpu : usr=94.96%, sys=4.74%, ctx=25, majf=0, minf=109 00:25:59.393 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:59.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.393 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.393 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:59.393 00:25:59.393 Run status group 0 (all jobs): 00:25:59.393 READ: bw=782KiB/s (801kB/s), 390KiB/s-392KiB/s (400kB/s-401kB/s), io=7824KiB (8012kB), run=10005-10007msec 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.393 20:24:46 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.393 00:25:59.393 real 0m11.453s 00:25:59.393 user 0m20.388s 00:25:59.393 sys 0m1.298s 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:59.393 20:24:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:59.393 ************************************ 00:25:59.393 END TEST fio_dif_1_multi_subsystems 00:25:59.393 ************************************ 00:25:59.393 20:24:46 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:59.393 20:24:46 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:59.393 20:24:46 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:59.393 20:24:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:59.651 ************************************ 00:25:59.651 START TEST fio_dif_rand_params 00:25:59.651 ************************************ 00:25:59.651 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:59.652 bdev_null0 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:59.652 [2024-05-16 20:24:46.574160] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:59.652 { 00:25:59.652 "params": { 00:25:59.652 "name": "Nvme$subsystem", 00:25:59.652 "trtype": "$TEST_TRANSPORT", 00:25:59.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.652 "adrfam": "ipv4", 00:25:59.652 "trsvcid": "$NVMF_PORT", 00:25:59.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.652 "hdgst": ${hdgst:-false}, 00:25:59.652 "ddgst": ${ddgst:-false} 00:25:59.652 }, 00:25:59.652 "method": "bdev_nvme_attach_controller" 00:25:59.652 } 00:25:59.652 EOF 00:25:59.652 )") 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
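
The "jq ." step above merges the per-controller fragments into the JSON config that the fio bdev plugin loads; the merged entries are printed in the trace that follows. As a rough standalone equivalent of the fio step below, the sketch writes that JSON plus a job file and invokes the plugin the same way the trace does. The outer "subsystems"/"bdev"/"config" wrapper follows SPDK's standard JSON config layout (only the inner entries are echoed in the log), the job file is illustrative rather than the literal gen_fio_conf output, and the filename Nvme0n1 assumes SPDK's usual <controller-name>n<nsid> bdev naming.

# Rough standalone equivalent of the traced fio run (fio_dif_rand_params,
# 128k randread, 3 jobs, iodepth 3, 5s). Assumptions: the plugin path matches
# this workspace; the job file is a sketch, not gen_fio_conf's exact output;
# Nvme0n1 is the conventional name for namespace 1 of controller "Nvme0".
PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev

# JSON config attaching the controller over TCP (params as printed in the trace)
cat > /tmp/nvme0.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

# Illustrative job file matching the banner printed below (randread, 128k, qd3)
cat > /tmp/dif.job <<'FIO'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
FIO

# Same invocation shape as the trace: preload the plugin, pass the JSON config
LD_PRELOAD=$PLUGIN /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf /tmp/nvme0.json /tmp/dif.job
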
00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:59.652 "params": { 00:25:59.652 "name": "Nvme0", 00:25:59.652 "trtype": "tcp", 00:25:59.652 "traddr": "10.0.0.2", 00:25:59.652 "adrfam": "ipv4", 00:25:59.652 "trsvcid": "4420", 00:25:59.652 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:59.652 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:59.652 "hdgst": false, 00:25:59.652 "ddgst": false 00:25:59.652 }, 00:25:59.652 "method": "bdev_nvme_attach_controller" 00:25:59.652 }' 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:59.652 20:24:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:59.910 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:59.910 ... 
00:25:59.910 fio-3.35 00:25:59.910 Starting 3 threads 00:25:59.910 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.467 00:26:06.467 filename0: (groupid=0, jobs=1): err= 0: pid=333039: Thu May 16 20:24:52 2024 00:26:06.467 read: IOPS=235, BW=29.5MiB/s (30.9MB/s)(149MiB/5045msec) 00:26:06.467 slat (nsec): min=4716, max=87108, avg=18102.60, stdev=5562.30 00:26:06.467 clat (usec): min=7341, max=93090, avg=12660.76, stdev=8243.46 00:26:06.467 lat (usec): min=7350, max=93109, avg=12678.87, stdev=8242.96 00:26:06.467 clat percentiles (usec): 00:26:06.467 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10290], 00:26:06.467 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:26:06.467 | 70.00th=[11731], 80.00th=[12125], 90.00th=[12780], 95.00th=[13566], 00:26:06.467 | 99.00th=[52691], 99.50th=[53740], 99.90th=[91751], 99.95th=[92799], 00:26:06.467 | 99.99th=[92799] 00:26:06.467 bw ( KiB/s): min=11008, max=36096, per=35.71%, avg=30387.20, stdev=7580.80, samples=10 00:26:06.467 iops : min= 86, max= 282, avg=237.40, stdev=59.22, samples=10 00:26:06.467 lat (msec) : 10=11.43%, 20=85.21%, 50=0.92%, 100=2.44% 00:26:06.467 cpu : usr=93.56%, sys=5.73%, ctx=50, majf=0, minf=197 00:26:06.467 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.467 issued rwts: total=1190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.467 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:06.467 filename0: (groupid=0, jobs=1): err= 0: pid=333040: Thu May 16 20:24:52 2024 00:26:06.467 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(143MiB/5046msec) 00:26:06.467 slat (nsec): min=4801, max=53827, avg=18352.31, stdev=5373.18 00:26:06.467 clat (usec): min=6518, max=52487, avg=13172.80, stdev=3961.38 00:26:06.467 lat (usec): min=6533, max=52508, avg=13191.15, stdev=3962.01 00:26:06.468 clat percentiles (usec): 00:26:06.468 | 1.00th=[ 7111], 5.00th=[ 7898], 10.00th=[ 8586], 20.00th=[11469], 00:26:06.468 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13042], 60.00th=[13698], 00:26:06.468 | 70.00th=[14484], 80.00th=[15139], 90.00th=[15926], 95.00th=[16450], 00:26:06.468 | 99.00th=[18744], 99.50th=[47973], 99.90th=[49546], 99.95th=[52691], 00:26:06.468 | 99.99th=[52691] 00:26:06.468 bw ( KiB/s): min=25856, max=33536, per=34.32%, avg=29209.60, stdev=2236.48, samples=10 00:26:06.468 iops : min= 202, max= 262, avg=228.20, stdev=17.47, samples=10 00:26:06.468 lat (msec) : 10=15.03%, 20=84.00%, 50=0.87%, 100=0.09% 00:26:06.468 cpu : usr=91.64%, sys=6.12%, ctx=290, majf=0, minf=86 00:26:06.468 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.468 issued rwts: total=1144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.468 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:06.468 filename0: (groupid=0, jobs=1): err= 0: pid=333041: Thu May 16 20:24:52 2024 00:26:06.468 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(128MiB/5046msec) 00:26:06.468 slat (nsec): min=4826, max=38167, avg=16535.55, stdev=3605.77 00:26:06.468 clat (usec): min=4279, max=50382, avg=14764.12, stdev=3869.42 00:26:06.468 lat (usec): min=4292, max=50403, avg=14780.65, stdev=3870.21 00:26:06.468 clat percentiles (usec): 
00:26:06.468 | 1.00th=[ 5276], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[12518], 00:26:06.468 | 30.00th=[14091], 40.00th=[14746], 50.00th=[15401], 60.00th=[15926], 00:26:06.468 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17695], 95.00th=[18220], 00:26:06.468 | 99.00th=[19268], 99.50th=[26084], 99.90th=[49021], 99.95th=[50594], 00:26:06.468 | 99.99th=[50594] 00:26:06.468 bw ( KiB/s): min=23040, max=34372, per=30.63%, avg=26067.60, stdev=3439.65, samples=10 00:26:06.468 iops : min= 180, max= 268, avg=203.60, stdev=26.73, samples=10 00:26:06.468 lat (msec) : 10=13.32%, 20=85.90%, 50=0.69%, 100=0.10% 00:26:06.468 cpu : usr=95.54%, sys=3.94%, ctx=23, majf=0, minf=52 00:26:06.468 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.468 issued rwts: total=1021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.468 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:06.468 00:26:06.468 Run status group 0 (all jobs): 00:26:06.468 READ: bw=83.1MiB/s (87.1MB/s), 25.3MiB/s-29.5MiB/s (26.5MB/s-30.9MB/s), io=419MiB (440MB), run=5045-5046msec 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
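
Symmetrically with the setup, the destroy_subsystems step traced a few entries above removes the subsystem before the null bdev that backed it. A standalone sketch under the same assumptions as before (running target, scripts/rpc.py in place of rpc_cmd); the trace continues below with the next case, which repeats the bring-up pattern for three subsystems with --dif-type 2.

# Teardown mirroring destroy_subsystem 0 in the trace: drop the NVMe-oF
# subsystem first, then delete the null bdev that served as its namespace.
RPC=./scripts/rpc.py
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_null_delete bdev_null0
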
00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.468 bdev_null0 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.468 [2024-05-16 20:24:52.739087] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.468 bdev_null1 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.468 bdev_null2 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.468 20:24:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:26:06.468 { 00:26:06.468 "params": { 00:26:06.468 "name": "Nvme$subsystem", 00:26:06.468 "trtype": "$TEST_TRANSPORT", 00:26:06.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.468 "adrfam": "ipv4", 00:26:06.468 "trsvcid": "$NVMF_PORT", 00:26:06.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.468 "hdgst": ${hdgst:-false}, 00:26:06.468 "ddgst": ${ddgst:-false} 00:26:06.468 }, 00:26:06.468 "method": "bdev_nvme_attach_controller" 00:26:06.469 } 00:26:06.469 EOF 00:26:06.469 )") 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:06.469 { 00:26:06.469 "params": { 00:26:06.469 "name": "Nvme$subsystem", 00:26:06.469 "trtype": "$TEST_TRANSPORT", 00:26:06.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.469 "adrfam": "ipv4", 00:26:06.469 "trsvcid": "$NVMF_PORT", 00:26:06.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.469 "hdgst": ${hdgst:-false}, 00:26:06.469 "ddgst": ${ddgst:-false} 00:26:06.469 }, 00:26:06.469 "method": "bdev_nvme_attach_controller" 00:26:06.469 } 00:26:06.469 EOF 00:26:06.469 )") 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:06.469 { 00:26:06.469 "params": { 00:26:06.469 "name": "Nvme$subsystem", 00:26:06.469 "trtype": "$TEST_TRANSPORT", 00:26:06.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.469 "adrfam": "ipv4", 00:26:06.469 "trsvcid": "$NVMF_PORT", 00:26:06.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.469 "hdgst": ${hdgst:-false}, 00:26:06.469 "ddgst": ${ddgst:-false} 00:26:06.469 }, 00:26:06.469 "method": "bdev_nvme_attach_controller" 00:26:06.469 } 00:26:06.469 EOF 00:26:06.469 )") 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:06.469 "params": { 00:26:06.469 "name": "Nvme0", 00:26:06.469 "trtype": "tcp", 00:26:06.469 "traddr": "10.0.0.2", 00:26:06.469 "adrfam": "ipv4", 00:26:06.469 "trsvcid": "4420", 00:26:06.469 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:06.469 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:06.469 "hdgst": false, 00:26:06.469 "ddgst": false 00:26:06.469 }, 00:26:06.469 "method": "bdev_nvme_attach_controller" 00:26:06.469 },{ 00:26:06.469 "params": { 00:26:06.469 "name": "Nvme1", 00:26:06.469 "trtype": "tcp", 00:26:06.469 "traddr": "10.0.0.2", 00:26:06.469 "adrfam": "ipv4", 00:26:06.469 "trsvcid": "4420", 00:26:06.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:06.469 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:06.469 "hdgst": false, 00:26:06.469 "ddgst": false 00:26:06.469 }, 00:26:06.469 "method": "bdev_nvme_attach_controller" 00:26:06.469 },{ 00:26:06.469 "params": { 00:26:06.469 "name": "Nvme2", 00:26:06.469 "trtype": "tcp", 00:26:06.469 "traddr": "10.0.0.2", 00:26:06.469 "adrfam": "ipv4", 00:26:06.469 "trsvcid": "4420", 00:26:06.469 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:06.469 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:06.469 "hdgst": false, 00:26:06.469 "ddgst": false 00:26:06.469 }, 00:26:06.469 "method": "bdev_nvme_attach_controller" 00:26:06.469 }' 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:06.469 20:24:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.469 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:06.469 ... 00:26:06.469 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:06.469 ... 00:26:06.469 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:06.469 ... 00:26:06.469 fio-3.35 00:26:06.469 Starting 24 threads 00:26:06.469 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.683 00:26:18.683 filename0: (groupid=0, jobs=1): err= 0: pid=333906: Thu May 16 20:25:04 2024 00:26:18.683 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10003msec) 00:26:18.683 slat (usec): min=8, max=127, avg=43.78, stdev=22.43 00:26:18.683 clat (usec): min=13788, max=36912, avg=33298.16, stdev=1538.97 00:26:18.683 lat (usec): min=13803, max=36934, avg=33341.93, stdev=1538.69 00:26:18.683 clat percentiles (usec): 00:26:18.683 | 1.00th=[28181], 5.00th=[32375], 10.00th=[32900], 20.00th=[33162], 00:26:18.683 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:26:18.683 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:18.683 | 99.00th=[35390], 99.50th=[35390], 99.90th=[36963], 99.95th=[36963], 00:26:18.683 | 99.99th=[36963] 00:26:18.683 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1899.95, stdev=47.58, samples=19 00:26:18.683 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:26:18.683 lat (msec) : 20=0.53%, 50=99.47% 00:26:18.683 cpu : usr=98.34%, sys=1.25%, ctx=14, majf=0, minf=65 00:26:18.683 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:18.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.683 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.683 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.683 filename0: (groupid=0, jobs=1): err= 0: pid=333907: Thu May 16 20:25:04 2024 00:26:18.683 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10011msec) 00:26:18.683 slat (usec): min=6, max=131, avg=30.45, stdev=25.81 00:26:18.683 clat (usec): min=7945, max=46819, avg=33337.02, stdev=2306.76 00:26:18.683 lat (usec): min=7953, max=46873, avg=33367.47, stdev=2306.07 00:26:18.683 clat percentiles (usec): 00:26:18.683 | 1.00th=[20841], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:26:18.683 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:26:18.684 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:18.684 | 99.00th=[35390], 99.50th=[36439], 99.90th=[46400], 99.95th=[46400], 00:26:18.684 | 99.99th=[46924] 00:26:18.684 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1900.80, stdev=62.64, samples=20 00:26:18.684 iops : min= 448, max= 512, avg=475.20, stdev=15.66, samples=20 00:26:18.684 lat (msec) : 10=0.34%, 20=0.34%, 50=99.33% 00:26:18.684 cpu : 
usr=97.41%, sys=1.69%, ctx=125, majf=0, minf=110 00:26:18.684 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:18.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.684 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.684 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.684 filename0: (groupid=0, jobs=1): err= 0: pid=333908: Thu May 16 20:25:04 2024 00:26:18.684 read: IOPS=473, BW=1893KiB/s (1939kB/s)(18.5MiB/10007msec) 00:26:18.684 slat (usec): min=9, max=110, avg=55.21, stdev=23.76 00:26:18.684 clat (usec): min=13692, max=54401, avg=33281.80, stdev=1797.90 00:26:18.684 lat (usec): min=13716, max=54434, avg=33337.01, stdev=1797.69 00:26:18.684 clat percentiles (usec): 00:26:18.684 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:26:18.684 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:26:18.684 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:26:18.684 | 99.00th=[35390], 99.50th=[36439], 99.90th=[54264], 99.95th=[54264], 00:26:18.684 | 99.99th=[54264] 00:26:18.684 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1886.32, stdev=71.93, samples=19 00:26:18.684 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:26:18.684 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:26:18.684 cpu : usr=98.24%, sys=1.35%, ctx=15, majf=0, minf=75 00:26:18.684 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:18.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.684 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.684 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.684 filename0: (groupid=0, jobs=1): err= 0: pid=333909: Thu May 16 20:25:04 2024 00:26:18.684 read: IOPS=471, BW=1887KiB/s (1933kB/s)(18.4MiB/10004msec) 00:26:18.684 slat (usec): min=8, max=115, avg=46.14, stdev=26.70 00:26:18.684 clat (usec): min=25955, max=65711, avg=33501.24, stdev=1989.95 00:26:18.684 lat (usec): min=26031, max=65826, avg=33547.39, stdev=1986.91 00:26:18.684 clat percentiles (usec): 00:26:18.684 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:26:18.684 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:26:18.684 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:18.684 | 99.00th=[35914], 99.50th=[36439], 99.90th=[65274], 99.95th=[65799], 00:26:18.684 | 99.99th=[65799] 00:26:18.684 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1886.32, stdev=71.93, samples=19 00:26:18.684 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:26:18.684 lat (msec) : 50=99.66%, 100=0.34% 00:26:18.684 cpu : usr=98.33%, sys=1.28%, ctx=15, majf=0, minf=70 00:26:18.684 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:18.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.684 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.684 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.684 filename0: (groupid=0, jobs=1): err= 0: pid=333910: Thu May 16 20:25:04 2024 00:26:18.684 read: IOPS=472, BW=1891KiB/s 
(1937kB/s)(18.5MiB/10017msec) 00:26:18.684 slat (usec): min=9, max=128, avg=45.32, stdev=16.66 00:26:18.684 clat (usec): min=26366, max=47070, avg=33448.19, stdev=1011.06 00:26:18.684 lat (usec): min=26405, max=47087, avg=33493.52, stdev=1010.35 00:26:18.684 clat percentiles (usec): 00:26:18.684 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:26:18.684 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:26:18.684 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:26:18.684 | 99.00th=[35390], 99.50th=[36439], 99.90th=[46924], 99.95th=[46924], 00:26:18.684 | 99.99th=[46924] 00:26:18.684 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=56.87, samples=20 00:26:18.684 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:26:18.684 lat (msec) : 50=100.00% 00:26:18.684 cpu : usr=97.42%, sys=1.79%, ctx=128, majf=0, minf=65 00:26:18.684 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:18.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.684 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.684 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.684 filename0: (groupid=0, jobs=1): err= 0: pid=333911: Thu May 16 20:25:04 2024 00:26:18.684 read: IOPS=473, BW=1896KiB/s (1941kB/s)(18.5MiB/10005msec) 00:26:18.684 slat (usec): min=8, max=114, avg=41.30, stdev=17.24 00:26:18.684 clat (usec): min=10520, max=56437, avg=33365.04, stdev=2388.82 00:26:18.684 lat (usec): min=10542, max=56467, avg=33406.34, stdev=2389.56 00:26:18.684 clat percentiles (usec): 00:26:18.684 | 1.00th=[26608], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:26:18.684 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:26:18.684 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:18.684 | 99.00th=[36439], 99.50th=[54264], 99.90th=[56361], 99.95th=[56361], 00:26:18.684 | 99.99th=[56361] 00:26:18.684 bw ( KiB/s): min= 1664, max= 1968, per=4.16%, avg=1888.84, stdev=73.99, samples=19 00:26:18.684 iops : min= 416, max= 492, avg=472.21, stdev=18.50, samples=19 00:26:18.684 lat (msec) : 20=0.55%, 50=98.90%, 100=0.55% 00:26:18.684 cpu : usr=98.11%, sys=1.31%, ctx=66, majf=0, minf=68 00:26:18.684 IO depths : 1=5.7%, 2=11.7%, 4=24.0%, 8=51.6%, 16=7.0%, 32=0.0%, >=64=0.0% 00:26:18.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.684 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.684 issued rwts: total=4742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.684 filename0: (groupid=0, jobs=1): err= 0: pid=333912: Thu May 16 20:25:04 2024 00:26:18.684 read: IOPS=472, BW=1891KiB/s (1937kB/s)(18.5MiB/10017msec) 00:26:18.684 slat (usec): min=8, max=106, avg=38.01, stdev=15.59 00:26:18.684 clat (usec): min=21801, max=58481, avg=33531.48, stdev=1155.96 00:26:18.685 lat (usec): min=21813, max=58508, avg=33569.49, stdev=1154.25 00:26:18.685 clat percentiles (usec): 00:26:18.685 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:26:18.685 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:26:18.685 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:18.685 | 99.00th=[35390], 99.50th=[36963], 99.90th=[46400], 99.95th=[46400], 
00:26:18.685 | 99.99th=[58459] 00:26:18.685 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=56.87, samples=20 00:26:18.685 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:26:18.685 lat (msec) : 50=99.96%, 100=0.04% 00:26:18.685 cpu : usr=98.31%, sys=1.29%, ctx=33, majf=0, minf=51 00:26:18.685 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:18.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.685 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.685 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.685 filename0: (groupid=0, jobs=1): err= 0: pid=333913: Thu May 16 20:25:04 2024 00:26:18.685 read: IOPS=473, BW=1893KiB/s (1939kB/s)(18.5MiB/10007msec) 00:26:18.685 slat (nsec): min=9075, max=87362, avg=33954.15, stdev=11474.91 00:26:18.685 clat (usec): min=13703, max=54488, avg=33494.84, stdev=1769.96 00:26:18.685 lat (usec): min=13725, max=54506, avg=33528.80, stdev=1770.61 00:26:18.685 clat percentiles (usec): 00:26:18.685 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:26:18.685 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:26:18.685 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:18.685 | 99.00th=[35390], 99.50th=[36439], 99.90th=[54264], 99.95th=[54264], 00:26:18.685 | 99.99th=[54264] 00:26:18.685 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1886.32, stdev=71.93, samples=19 00:26:18.685 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:26:18.685 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:26:18.685 cpu : usr=97.01%, sys=2.08%, ctx=105, majf=0, minf=79 00:26:18.685 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:18.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.685 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.685 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.685 filename1: (groupid=0, jobs=1): err= 0: pid=333914: Thu May 16 20:25:04 2024 00:26:18.685 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10003msec) 00:26:18.685 slat (nsec): min=8452, max=63045, avg=28310.52, stdev=9004.87 00:26:18.685 clat (usec): min=14994, max=36807, avg=33439.88, stdev=1539.54 00:26:18.685 lat (usec): min=15010, max=36841, avg=33468.19, stdev=1539.33 00:26:18.685 clat percentiles (usec): 00:26:18.685 | 1.00th=[26608], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:26:18.685 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:26:18.685 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:18.685 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36963], 00:26:18.685 | 99.99th=[36963] 00:26:18.685 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1899.95, stdev=47.58, samples=19 00:26:18.685 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:26:18.685 lat (msec) : 20=0.67%, 50=99.33% 00:26:18.685 cpu : usr=97.98%, sys=1.54%, ctx=36, majf=0, minf=132 00:26:18.685 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:18.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.685 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.685 
issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.685 filename1: (groupid=0, jobs=1): err= 0: pid=333915: Thu May 16 20:25:04 2024 00:26:18.685 read: IOPS=472, BW=1891KiB/s (1937kB/s)(18.5MiB/10017msec) 00:26:18.685 slat (usec): min=14, max=113, avg=40.92, stdev=14.11 00:26:18.685 clat (usec): min=26399, max=46782, avg=33469.97, stdev=1001.98 00:26:18.685 lat (usec): min=26434, max=46804, avg=33510.89, stdev=1001.44 00:26:18.685 clat percentiles (usec): 00:26:18.685 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:26:18.685 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:26:18.685 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:18.685 | 99.00th=[35390], 99.50th=[36439], 99.90th=[46924], 99.95th=[46924], 00:26:18.685 | 99.99th=[46924] 00:26:18.685 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=56.87, samples=20 00:26:18.685 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:26:18.685 lat (msec) : 50=100.00% 00:26:18.685 cpu : usr=98.01%, sys=1.58%, ctx=20, majf=0, minf=62 00:26:18.685 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:18.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.685 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.685 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.685 filename1: (groupid=0, jobs=1): err= 0: pid=333916: Thu May 16 20:25:04 2024 00:26:18.685 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10010msec) 00:26:18.685 slat (usec): min=8, max=152, avg=69.43, stdev=24.91 00:26:18.685 clat (usec): min=7478, max=36935, avg=32986.38, stdev=2209.91 00:26:18.685 lat (usec): min=7486, max=36952, avg=33055.81, stdev=2212.18 00:26:18.685 clat percentiles (usec): 00:26:18.685 | 1.00th=[20579], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:26:18.685 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:26:18.685 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:26:18.685 | 99.00th=[34866], 99.50th=[35390], 99.90th=[36963], 99.95th=[36963], 00:26:18.685 | 99.99th=[36963] 00:26:18.685 bw ( KiB/s): min= 1792, max= 2052, per=4.19%, avg=1901.00, stdev=63.14, samples=20 00:26:18.685 iops : min= 448, max= 513, avg=475.25, stdev=15.78, samples=20 00:26:18.685 lat (msec) : 10=0.34%, 20=0.44%, 50=99.22% 00:26:18.685 cpu : usr=98.56%, sys=1.01%, ctx=19, majf=0, minf=63 00:26:18.685 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:18.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.685 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.685 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.685 filename1: (groupid=0, jobs=1): err= 0: pid=333917: Thu May 16 20:25:04 2024 00:26:18.685 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10010msec) 00:26:18.685 slat (usec): min=9, max=140, avg=40.63, stdev=17.41 00:26:18.685 clat (usec): min=12294, max=59849, avg=33425.23, stdev=1925.61 00:26:18.685 lat (usec): min=12312, max=59879, avg=33465.86, stdev=1926.57 00:26:18.685 clat percentiles (usec): 00:26:18.685 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 
20.00th=[33162], 00:26:18.685 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:26:18.685 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:18.685 | 99.00th=[35390], 99.50th=[36439], 99.90th=[56361], 99.95th=[56361], 00:26:18.685 | 99.99th=[60031] 00:26:18.685 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1886.32, stdev=71.93, samples=19 00:26:18.685 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:26:18.685 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:26:18.685 cpu : usr=97.50%, sys=1.66%, ctx=165, majf=0, minf=70 00:26:18.685 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:18.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.685 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.685 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.685 filename1: (groupid=0, jobs=1): err= 0: pid=333918: Thu May 16 20:25:04 2024 00:26:18.685 read: IOPS=471, BW=1887KiB/s (1933kB/s)(18.4MiB/10003msec) 00:26:18.685 slat (usec): min=9, max=107, avg=36.48, stdev=12.83 00:26:18.685 clat (usec): min=26749, max=65593, avg=33582.61, stdev=1939.28 00:26:18.685 lat (usec): min=26783, max=65656, avg=33619.08, stdev=1941.90 00:26:18.685 clat percentiles (usec): 00:26:18.685 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:26:18.685 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:26:18.685 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:18.685 | 99.00th=[35390], 99.50th=[36439], 99.90th=[65274], 99.95th=[65274], 00:26:18.685 | 99.99th=[65799] 00:26:18.685 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1886.47, stdev=71.42, samples=19 00:26:18.685 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:26:18.686 lat (msec) : 50=99.66%, 100=0.34% 00:26:18.686 cpu : usr=98.13%, sys=1.49%, ctx=27, majf=0, minf=68 00:26:18.686 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:18.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.686 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.686 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.686 filename1: (groupid=0, jobs=1): err= 0: pid=333919: Thu May 16 20:25:04 2024 00:26:18.686 read: IOPS=473, BW=1893KiB/s (1939kB/s)(18.5MiB/10005msec) 00:26:18.686 slat (usec): min=10, max=137, avg=45.19, stdev=15.49 00:26:18.686 clat (usec): min=12457, max=56538, avg=33377.12, stdev=1904.47 00:26:18.686 lat (usec): min=12500, max=56567, avg=33422.31, stdev=1905.16 00:26:18.686 clat percentiles (usec): 00:26:18.686 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:26:18.686 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:26:18.686 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:26:18.686 | 99.00th=[34866], 99.50th=[36439], 99.90th=[56361], 99.95th=[56361], 00:26:18.686 | 99.99th=[56361] 00:26:18.686 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1886.32, stdev=71.93, samples=19 00:26:18.686 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:26:18.686 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:26:18.686 cpu : usr=96.51%, sys=2.22%, ctx=256, majf=0, minf=61 00:26:18.686 IO 
depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:18.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.686 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.686 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.686 filename1: (groupid=0, jobs=1): err= 0: pid=333920: Thu May 16 20:25:04 2024 00:26:18.686 read: IOPS=472, BW=1891KiB/s (1936kB/s)(18.5MiB/10020msec) 00:26:18.686 slat (usec): min=14, max=103, avg=43.63, stdev=14.29 00:26:18.686 clat (usec): min=26359, max=46829, avg=33437.97, stdev=1001.16 00:26:18.686 lat (usec): min=26394, max=46856, avg=33481.60, stdev=1000.66 00:26:18.686 clat percentiles (usec): 00:26:18.686 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:26:18.686 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:26:18.686 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:18.686 | 99.00th=[35390], 99.50th=[36439], 99.90th=[46924], 99.95th=[46924], 00:26:18.686 | 99.99th=[46924] 00:26:18.686 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=56.87, samples=20 00:26:18.686 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:26:18.686 lat (msec) : 50=100.00% 00:26:18.686 cpu : usr=97.63%, sys=1.67%, ctx=235, majf=0, minf=72 00:26:18.686 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:18.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.686 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.686 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.686 filename1: (groupid=0, jobs=1): err= 0: pid=333921: Thu May 16 20:25:04 2024 00:26:18.686 read: IOPS=472, BW=1891KiB/s (1937kB/s)(18.5MiB/10017msec) 00:26:18.686 slat (nsec): min=8562, max=87017, avg=38357.63, stdev=13016.04 00:26:18.686 clat (usec): min=26502, max=46749, avg=33529.58, stdev=995.56 00:26:18.686 lat (usec): min=26538, max=46771, avg=33567.94, stdev=992.99 00:26:18.686 clat percentiles (usec): 00:26:18.686 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:26:18.686 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:26:18.686 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:18.686 | 99.00th=[35390], 99.50th=[36439], 99.90th=[46924], 99.95th=[46924], 00:26:18.686 | 99.99th=[46924] 00:26:18.686 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=56.87, samples=20 00:26:18.686 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:26:18.686 lat (msec) : 50=100.00% 00:26:18.686 cpu : usr=97.66%, sys=1.74%, ctx=61, majf=0, minf=82 00:26:18.686 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:18.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.686 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.686 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.686 filename2: (groupid=0, jobs=1): err= 0: pid=333922: Thu May 16 20:25:04 2024 00:26:18.686 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10003msec) 00:26:18.686 slat (nsec): min=8953, max=64705, avg=27670.34, stdev=8800.12 
00:26:18.686 clat (usec): min=14955, max=36923, avg=33445.91, stdev=1538.39 00:26:18.686 lat (usec): min=14970, max=36952, avg=33473.58, stdev=1538.01 00:26:18.686 clat percentiles (usec): 00:26:18.686 | 1.00th=[26608], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:26:18.686 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:26:18.686 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:18.686 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36963], 99.95th=[36963], 00:26:18.686 | 99.99th=[36963] 00:26:18.686 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1899.95, stdev=47.58, samples=19 00:26:18.686 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:26:18.686 lat (msec) : 20=0.67%, 50=99.33% 00:26:18.686 cpu : usr=97.67%, sys=1.62%, ctx=98, majf=0, minf=102 00:26:18.686 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:18.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.686 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.686 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.686 filename2: (groupid=0, jobs=1): err= 0: pid=333923: Thu May 16 20:25:04 2024 00:26:18.686 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10010msec) 00:26:18.686 slat (nsec): min=6561, max=93532, avg=37967.66, stdev=15401.72 00:26:18.686 clat (usec): min=6996, max=36890, avg=33286.58, stdev=2192.84 00:26:18.686 lat (usec): min=7004, max=36932, avg=33324.55, stdev=2194.18 00:26:18.686 clat percentiles (usec): 00:26:18.686 | 1.00th=[20841], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:26:18.686 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:26:18.686 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:18.686 | 99.00th=[34866], 99.50th=[35390], 99.90th=[36963], 99.95th=[36963], 00:26:18.686 | 99.99th=[36963] 00:26:18.686 bw ( KiB/s): min= 1792, max= 2052, per=4.19%, avg=1901.00, stdev=63.14, samples=20 00:26:18.686 iops : min= 448, max= 513, avg=475.25, stdev=15.78, samples=20 00:26:18.686 lat (msec) : 10=0.34%, 20=0.34%, 50=99.33% 00:26:18.686 cpu : usr=97.39%, sys=1.70%, ctx=138, majf=0, minf=92 00:26:18.686 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:18.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.686 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.686 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.686 filename2: (groupid=0, jobs=1): err= 0: pid=333924: Thu May 16 20:25:04 2024 00:26:18.686 read: IOPS=472, BW=1891KiB/s (1936kB/s)(18.5MiB/10020msec) 00:26:18.686 slat (usec): min=8, max=105, avg=32.78, stdev=29.48 00:26:18.686 clat (usec): min=23467, max=59128, avg=33550.94, stdev=1694.83 00:26:18.686 lat (usec): min=23479, max=59160, avg=33583.72, stdev=1690.82 00:26:18.686 clat percentiles (usec): 00:26:18.686 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[33162], 00:26:18.686 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:26:18.686 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:18.686 | 99.00th=[35390], 99.50th=[35390], 99.90th=[58983], 99.95th=[58983], 00:26:18.686 | 99.99th=[58983] 00:26:18.686 bw ( KiB/s): min= 1664, max= 1920, 
per=4.16%, avg=1888.00, stdev=70.42, samples=20 00:26:18.686 iops : min= 416, max= 480, avg=472.00, stdev=17.60, samples=20 00:26:18.686 lat (msec) : 50=99.66%, 100=0.34% 00:26:18.686 cpu : usr=98.38%, sys=1.21%, ctx=28, majf=0, minf=101 00:26:18.686 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:18.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.686 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.686 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.686 filename2: (groupid=0, jobs=1): err= 0: pid=333925: Thu May 16 20:25:04 2024 00:26:18.686 read: IOPS=473, BW=1893KiB/s (1939kB/s)(18.5MiB/10006msec) 00:26:18.686 slat (usec): min=7, max=124, avg=53.49, stdev=20.87 00:26:18.686 clat (usec): min=12294, max=56966, avg=33306.43, stdev=1964.87 00:26:18.686 lat (usec): min=12312, max=56982, avg=33359.93, stdev=1964.09 00:26:18.686 clat percentiles (usec): 00:26:18.686 | 1.00th=[31851], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:26:18.686 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:26:18.686 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:26:18.686 | 99.00th=[35390], 99.50th=[36439], 99.90th=[56886], 99.95th=[56886], 00:26:18.686 | 99.99th=[56886] 00:26:18.686 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1886.32, stdev=71.93, samples=19 00:26:18.686 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:26:18.686 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:26:18.686 cpu : usr=97.82%, sys=1.53%, ctx=57, majf=0, minf=78 00:26:18.686 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:18.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.686 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.686 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.686 filename2: (groupid=0, jobs=1): err= 0: pid=333926: Thu May 16 20:25:04 2024 00:26:18.686 read: IOPS=474, BW=1898KiB/s (1944kB/s)(18.5MiB/10005msec) 00:26:18.686 slat (usec): min=7, max=142, avg=52.29, stdev=26.01 00:26:18.686 clat (usec): min=10214, max=86532, avg=33249.87, stdev=3095.90 00:26:18.686 lat (usec): min=10254, max=86549, avg=33302.16, stdev=3094.37 00:26:18.687 clat percentiles (usec): 00:26:18.687 | 1.00th=[23725], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:26:18.687 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:26:18.687 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:18.687 | 99.00th=[36439], 99.50th=[54789], 99.90th=[68682], 99.95th=[68682], 00:26:18.687 | 99.99th=[86508] 00:26:18.687 bw ( KiB/s): min= 1632, max= 2048, per=4.16%, avg=1891.37, stdev=85.93, samples=19 00:26:18.687 iops : min= 408, max= 512, avg=472.84, stdev=21.48, samples=19 00:26:18.687 lat (msec) : 20=0.76%, 50=98.74%, 100=0.51% 00:26:18.687 cpu : usr=96.60%, sys=2.13%, ctx=145, majf=0, minf=68 00:26:18.687 IO depths : 1=5.7%, 2=11.5%, 4=23.2%, 8=52.5%, 16=7.1%, 32=0.0%, >=64=0.0% 00:26:18.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.687 complete : 0=0.0%, 4=93.7%, 8=0.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.687 issued rwts: total=4748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.687 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:26:18.687 filename2: (groupid=0, jobs=1): err= 0: pid=333927: Thu May 16 20:25:04 2024 00:26:18.687 read: IOPS=472, BW=1891KiB/s (1937kB/s)(18.5MiB/10017msec) 00:26:18.687 slat (usec): min=11, max=122, avg=48.96, stdev=16.87 00:26:18.687 clat (usec): min=19595, max=67785, avg=33404.78, stdev=1313.58 00:26:18.687 lat (usec): min=19648, max=67805, avg=33453.74, stdev=1312.06 00:26:18.687 clat percentiles (usec): 00:26:18.687 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:26:18.687 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:26:18.687 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:26:18.687 | 99.00th=[35390], 99.50th=[36963], 99.90th=[46924], 99.95th=[46924], 00:26:18.687 | 99.99th=[67634] 00:26:18.687 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=56.87, samples=20 00:26:18.687 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:26:18.687 lat (msec) : 20=0.04%, 50=99.92%, 100=0.04% 00:26:18.687 cpu : usr=98.35%, sys=1.25%, ctx=17, majf=0, minf=76 00:26:18.687 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:18.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.687 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.687 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.687 filename2: (groupid=0, jobs=1): err= 0: pid=333928: Thu May 16 20:25:04 2024 00:26:18.687 read: IOPS=472, BW=1891KiB/s (1937kB/s)(18.5MiB/10017msec) 00:26:18.687 slat (usec): min=14, max=120, avg=44.37, stdev=16.25 00:26:18.687 clat (usec): min=26360, max=46794, avg=33428.99, stdev=1009.48 00:26:18.687 lat (usec): min=26393, max=46819, avg=33473.36, stdev=1009.13 00:26:18.687 clat percentiles (usec): 00:26:18.687 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:26:18.687 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:26:18.687 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:18.687 | 99.00th=[35390], 99.50th=[36439], 99.90th=[46924], 99.95th=[46924], 00:26:18.687 | 99.99th=[46924] 00:26:18.687 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=56.87, samples=20 00:26:18.687 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:26:18.687 lat (msec) : 50=100.00% 00:26:18.687 cpu : usr=98.48%, sys=1.11%, ctx=15, majf=0, minf=78 00:26:18.687 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:18.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.687 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.687 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.687 filename2: (groupid=0, jobs=1): err= 0: pid=333929: Thu May 16 20:25:04 2024 00:26:18.687 read: IOPS=471, BW=1887KiB/s (1933kB/s)(18.4MiB/10003msec) 00:26:18.687 slat (usec): min=12, max=115, avg=40.72, stdev=17.35 00:26:18.687 clat (usec): min=26730, max=65245, avg=33545.34, stdev=1948.25 00:26:18.687 lat (usec): min=26757, max=65299, avg=33586.06, stdev=1947.05 00:26:18.687 clat percentiles (usec): 00:26:18.687 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:26:18.687 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 
60.00th=[33424], 00:26:18.687 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:18.687 | 99.00th=[35390], 99.50th=[36439], 99.90th=[65274], 99.95th=[65274], 00:26:18.687 | 99.99th=[65274] 00:26:18.687 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1886.47, stdev=71.42, samples=19 00:26:18.687 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:26:18.687 lat (msec) : 50=99.66%, 100=0.34% 00:26:18.687 cpu : usr=97.85%, sys=1.42%, ctx=94, majf=0, minf=73 00:26:18.687 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:18.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.687 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.687 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:18.687 00:26:18.687 Run status group 0 (all jobs): 00:26:18.687 READ: bw=44.4MiB/s (46.5MB/s), 1887KiB/s-1905KiB/s (1933kB/s-1951kB/s), io=444MiB (466MB), run=10003-10020msec 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.687 20:25:04 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:18.687 bdev_null0 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.687 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:18.687 [2024-05-16 20:25:04.467627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 bdev_null1 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:18.688 
20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:18.688 { 00:26:18.688 "params": { 00:26:18.688 "name": "Nvme$subsystem", 00:26:18.688 "trtype": "$TEST_TRANSPORT", 00:26:18.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.688 "adrfam": "ipv4", 00:26:18.688 "trsvcid": "$NVMF_PORT", 00:26:18.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.688 "hdgst": ${hdgst:-false}, 00:26:18.688 "ddgst": ${ddgst:-false} 00:26:18.688 }, 00:26:18.688 "method": "bdev_nvme_attach_controller" 00:26:18.688 } 00:26:18.688 EOF 00:26:18.688 )") 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:18.688 { 00:26:18.688 "params": { 00:26:18.688 "name": "Nvme$subsystem", 00:26:18.688 "trtype": "$TEST_TRANSPORT", 00:26:18.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.688 "adrfam": "ipv4", 00:26:18.688 "trsvcid": "$NVMF_PORT", 00:26:18.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.688 "hdgst": ${hdgst:-false}, 00:26:18.688 "ddgst": ${ddgst:-false} 00:26:18.688 }, 00:26:18.688 "method": "bdev_nvme_attach_controller" 00:26:18.688 } 00:26:18.688 EOF 00:26:18.688 )") 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:18.688 20:25:04 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:18.688 "params": { 00:26:18.688 "name": "Nvme0", 00:26:18.688 "trtype": "tcp", 00:26:18.688 "traddr": "10.0.0.2", 00:26:18.688 "adrfam": "ipv4", 00:26:18.688 "trsvcid": "4420", 00:26:18.688 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:18.688 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:18.688 "hdgst": false, 00:26:18.688 "ddgst": false 00:26:18.688 }, 00:26:18.688 "method": "bdev_nvme_attach_controller" 00:26:18.688 },{ 00:26:18.688 "params": { 00:26:18.688 "name": "Nvme1", 00:26:18.688 "trtype": "tcp", 00:26:18.688 "traddr": "10.0.0.2", 00:26:18.688 "adrfam": "ipv4", 00:26:18.688 "trsvcid": "4420", 00:26:18.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:18.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:18.688 "hdgst": false, 00:26:18.688 "ddgst": false 00:26:18.688 }, 00:26:18.688 "method": "bdev_nvme_attach_controller" 00:26:18.688 }' 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:18.688 20:25:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:18.688 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:18.688 ... 00:26:18.688 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:18.688 ... 
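Stripped of the harness plumbing, the target setup and fio invocation traced above reduce to a few JSON-RPC calls plus one fio run through the SPDK bdev plugin. A minimal sketch, assuming the rpc_cmd calls in the trace map to scripts/rpc.py invocations (as in SPDK's test helpers) and that rpc.py and build/fio/spdk_bdev sit at their default locations in the tree; bdev.json and dif.fio are illustrative stand-ins for the /dev/fd/62 and /dev/fd/61 descriptors the script passes, and the second subsystem (bdev_null1/cnode1) follows the same pattern:

# DIF type 1 null bdev with 16-byte metadata (arguments copied from the trace), exported over NVMe/TCP
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Run fio against the target through the SPDK bdev ioengine, as the LD_PRELOAD line in the trace does
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio

The teardown at the end of each test is the mirror image, as the rpc_cmd calls in the trace show: nvmf_delete_subsystem for each cnode, then bdev_null_delete for each null bdev.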
00:26:18.688 fio-3.35 00:26:18.688 Starting 4 threads 00:26:18.688 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.954 00:26:23.954 filename0: (groupid=0, jobs=1): err= 0: pid=335194: Thu May 16 20:25:10 2024 00:26:23.954 read: IOPS=1722, BW=13.5MiB/s (14.1MB/s)(67.3MiB/5003msec) 00:26:23.954 slat (nsec): min=6569, max=70764, avg=16098.71, stdev=8649.90 00:26:23.954 clat (usec): min=862, max=8521, avg=4588.10, stdev=814.83 00:26:23.954 lat (usec): min=881, max=8544, avg=4604.20, stdev=814.63 00:26:23.954 clat percentiles (usec): 00:26:23.954 | 1.00th=[ 2835], 5.00th=[ 3556], 10.00th=[ 3785], 20.00th=[ 4047], 00:26:23.954 | 30.00th=[ 4228], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4621], 00:26:23.954 | 70.00th=[ 4752], 80.00th=[ 4948], 90.00th=[ 5604], 95.00th=[ 6259], 00:26:23.954 | 99.00th=[ 7439], 99.50th=[ 7832], 99.90th=[ 8291], 99.95th=[ 8356], 00:26:23.954 | 99.99th=[ 8586] 00:26:23.954 bw ( KiB/s): min=13344, max=14704, per=24.94%, avg=13873.78, stdev=523.83, samples=9 00:26:23.954 iops : min= 1668, max= 1838, avg=1734.22, stdev=65.48, samples=9 00:26:23.954 lat (usec) : 1000=0.01% 00:26:23.954 lat (msec) : 2=0.10%, 4=17.43%, 10=82.45% 00:26:23.954 cpu : usr=94.20%, sys=4.64%, ctx=83, majf=0, minf=0 00:26:23.954 IO depths : 1=0.3%, 2=12.9%, 4=58.9%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.954 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.954 issued rwts: total=8617,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.954 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:23.954 filename0: (groupid=0, jobs=1): err= 0: pid=335195: Thu May 16 20:25:10 2024 00:26:23.954 read: IOPS=1679, BW=13.1MiB/s (13.8MB/s)(65.6MiB/5001msec) 00:26:23.954 slat (nsec): min=7242, max=70501, avg=13976.10, stdev=7781.59 00:26:23.954 clat (usec): min=840, max=8957, avg=4715.92, stdev=870.03 00:26:23.954 lat (usec): min=848, max=8970, avg=4729.89, stdev=869.85 00:26:23.954 clat percentiles (usec): 00:26:23.954 | 1.00th=[ 2835], 5.00th=[ 3654], 10.00th=[ 3916], 20.00th=[ 4178], 00:26:23.954 | 30.00th=[ 4293], 40.00th=[ 4424], 50.00th=[ 4555], 60.00th=[ 4686], 00:26:23.954 | 70.00th=[ 4817], 80.00th=[ 5145], 90.00th=[ 5866], 95.00th=[ 6456], 00:26:23.954 | 99.00th=[ 7635], 99.50th=[ 8094], 99.90th=[ 8455], 99.95th=[ 8717], 00:26:23.954 | 99.99th=[ 8979] 00:26:23.954 bw ( KiB/s): min=12656, max=14176, per=24.19%, avg=13451.78, stdev=438.10, samples=9 00:26:23.954 iops : min= 1582, max= 1772, avg=1681.44, stdev=54.77, samples=9 00:26:23.954 lat (usec) : 1000=0.07% 00:26:23.954 lat (msec) : 2=0.20%, 4=12.67%, 10=87.06% 00:26:23.954 cpu : usr=94.80%, sys=4.74%, ctx=7, majf=0, minf=9 00:26:23.954 IO depths : 1=0.2%, 2=9.8%, 4=61.6%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.954 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.954 issued rwts: total=8401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.954 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:23.954 filename1: (groupid=0, jobs=1): err= 0: pid=335196: Thu May 16 20:25:10 2024 00:26:23.954 read: IOPS=1904, BW=14.9MiB/s (15.6MB/s)(74.4MiB/5002msec) 00:26:23.954 slat (usec): min=6, max=346, avg=11.04, stdev= 6.11 00:26:23.954 clat (usec): min=1400, max=7066, avg=4158.99, stdev=606.00 00:26:23.954 lat (usec): min=1409, max=7074, avg=4170.03, stdev=606.16 00:26:23.954 clat percentiles 
(usec): 00:26:23.954 | 1.00th=[ 2278], 5.00th=[ 2999], 10.00th=[ 3425], 20.00th=[ 3752], 00:26:23.954 | 30.00th=[ 3982], 40.00th=[ 4113], 50.00th=[ 4228], 60.00th=[ 4359], 00:26:23.954 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 5014], 00:26:23.954 | 99.00th=[ 5538], 99.50th=[ 5866], 99.90th=[ 6325], 99.95th=[ 6521], 00:26:23.954 | 99.99th=[ 7046] 00:26:23.954 bw ( KiB/s): min=14624, max=16000, per=27.40%, avg=15241.60, stdev=527.50, samples=10 00:26:23.954 iops : min= 1828, max= 2000, avg=1905.20, stdev=65.94, samples=10 00:26:23.954 lat (msec) : 2=0.31%, 4=32.34%, 10=67.35% 00:26:23.955 cpu : usr=94.54%, sys=4.98%, ctx=10, majf=0, minf=2 00:26:23.955 IO depths : 1=1.1%, 2=15.7%, 4=57.6%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.955 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.955 issued rwts: total=9527,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.955 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:23.955 filename1: (groupid=0, jobs=1): err= 0: pid=335197: Thu May 16 20:25:10 2024 00:26:23.955 read: IOPS=1646, BW=12.9MiB/s (13.5MB/s)(64.3MiB/5001msec) 00:26:23.955 slat (nsec): min=7412, max=68116, avg=14299.76, stdev=8239.78 00:26:23.955 clat (usec): min=1162, max=8636, avg=4810.03, stdev=915.28 00:26:23.955 lat (usec): min=1194, max=8643, avg=4824.33, stdev=914.81 00:26:23.955 clat percentiles (usec): 00:26:23.955 | 1.00th=[ 2999], 5.00th=[ 3720], 10.00th=[ 3949], 20.00th=[ 4228], 00:26:23.955 | 30.00th=[ 4359], 40.00th=[ 4490], 50.00th=[ 4621], 60.00th=[ 4752], 00:26:23.955 | 70.00th=[ 4948], 80.00th=[ 5342], 90.00th=[ 6063], 95.00th=[ 6783], 00:26:23.955 | 99.00th=[ 7832], 99.50th=[ 8029], 99.90th=[ 8455], 99.95th=[ 8455], 00:26:23.955 | 99.99th=[ 8586] 00:26:23.955 bw ( KiB/s): min=12496, max=13920, per=23.69%, avg=13175.11, stdev=491.08, samples=9 00:26:23.955 iops : min= 1562, max= 1740, avg=1646.89, stdev=61.38, samples=9 00:26:23.955 lat (msec) : 2=0.22%, 4=11.30%, 10=88.48% 00:26:23.955 cpu : usr=94.98%, sys=4.54%, ctx=6, majf=0, minf=9 00:26:23.955 IO depths : 1=0.1%, 2=8.5%, 4=62.5%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.955 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.955 issued rwts: total=8236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.955 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:23.955 00:26:23.955 Run status group 0 (all jobs): 00:26:23.955 READ: bw=54.3MiB/s (57.0MB/s), 12.9MiB/s-14.9MiB/s (13.5MB/s-15.6MB/s), io=272MiB (285MB), run=5001-5003msec 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:23.955 20:25:10 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.955 00:26:23.955 real 0m24.163s 00:26:23.955 user 4m32.697s 00:26:23.955 sys 0m6.415s 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:23.955 20:25:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:23.955 ************************************ 00:26:23.955 END TEST fio_dif_rand_params 00:26:23.955 ************************************ 00:26:23.955 20:25:10 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:23.955 20:25:10 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:23.955 20:25:10 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:23.955 20:25:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:23.955 ************************************ 00:26:23.955 START TEST fio_dif_digest 00:26:23.955 ************************************ 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:23.955 20:25:10 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:23.955 bdev_null0 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.955 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:23.956 [2024-05-16 20:25:10.789747] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.956 { 00:26:23.956 "params": { 00:26:23.956 "name": "Nvme$subsystem", 00:26:23.956 "trtype": "$TEST_TRANSPORT", 00:26:23.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.956 "adrfam": "ipv4", 00:26:23.956 "trsvcid": "$NVMF_PORT", 00:26:23.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.956 "hdgst": ${hdgst:-false}, 00:26:23.956 "ddgst": ${ddgst:-false} 00:26:23.956 }, 00:26:23.956 "method": "bdev_nvme_attach_controller" 00:26:23.956 } 00:26:23.956 EOF 00:26:23.956 )") 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
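The job file assembled by gen_fio_conf here is handed to fio on a file descriptor and never echoed into the log. Based on the parameters set for this digest test (bs=128k, numjobs=3, iodepth=3, runtime=10) and the job banner fio prints below, it is roughly the following sketch; the filename value is an assumption (the namespace bdev exposed by the Nvme0 controller attached in the JSON config), not something shown in the trace:

# Illustrative reconstruction of the generated fio job file (written here as dif.fio)
cat > dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
time_based=1
runtime=10
rw=randread
bs=128k
iodepth=3
numjobs=3

[filename0]
filename=Nvme0n1   ; assumption: bdev created by the bdev_nvme_attach_controller entry in the JSON config
EOF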
00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:23.956 "params": { 00:26:23.956 "name": "Nvme0", 00:26:23.956 "trtype": "tcp", 00:26:23.956 "traddr": "10.0.0.2", 00:26:23.956 "adrfam": "ipv4", 00:26:23.956 "trsvcid": "4420", 00:26:23.956 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.956 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:23.956 "hdgst": true, 00:26:23.956 "ddgst": true 00:26:23.956 }, 00:26:23.956 "method": "bdev_nvme_attach_controller" 00:26:23.956 }' 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:23.956 20:25:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:23.956 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:23.956 ... 
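The resolved JSON printed above is what gen_nvmf_target_json hands to the fio bdev plugin: a single bdev_nvme_attach_controller over TCP with both header digest ("hdgst": true) and data digest ("ddgst": true) enabled, pointing at the DIF-protected null bdev created a few lines earlier. Stripped of the test harness, this case could be reproduced roughly as in the sketch below; the rpc.py path, the standalone nvmf_create_transport call (done earlier by the harness in this run), and the two file names on the fio line (stand-ins for the /dev/fd/62 and /dev/fd/61 descriptors above) are assumptions, while the address, NQN, and bdev parameters are the ones from this log.

  # target side: null bdev with 16-byte metadata and DIF type 3, exported over NVMe/TCP
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: preload the SPDK fio plugin and feed it the JSON shown above
  LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif_digest.fio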
00:26:23.956 fio-3.35 00:26:23.956 Starting 3 threads 00:26:23.956 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.264 00:26:36.264 filename0: (groupid=0, jobs=1): err= 0: pid=336065: Thu May 16 20:25:21 2024 00:26:36.264 read: IOPS=194, BW=24.3MiB/s (25.5MB/s)(244MiB/10048msec) 00:26:36.264 slat (nsec): min=7625, max=70262, avg=14096.24, stdev=3099.54 00:26:36.264 clat (usec): min=10961, max=52881, avg=15380.33, stdev=1591.16 00:26:36.264 lat (usec): min=10975, max=52895, avg=15394.42, stdev=1591.15 00:26:36.264 clat percentiles (usec): 00:26:36.264 | 1.00th=[12518], 5.00th=[13566], 10.00th=[13960], 20.00th=[14484], 00:26:36.264 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664], 00:26:36.264 | 70.00th=[15795], 80.00th=[16057], 90.00th=[16581], 95.00th=[17171], 00:26:36.264 | 99.00th=[18220], 99.50th=[19530], 99.90th=[48497], 99.95th=[52691], 00:26:36.264 | 99.99th=[52691] 00:26:36.264 bw ( KiB/s): min=24320, max=25600, per=34.06%, avg=24985.60, stdev=393.10, samples=20 00:26:36.264 iops : min= 190, max= 200, avg=195.20, stdev= 3.07, samples=20 00:26:36.264 lat (msec) : 20=99.69%, 50=0.26%, 100=0.05% 00:26:36.264 cpu : usr=92.63%, sys=6.89%, ctx=20, majf=0, minf=165 00:26:36.264 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:36.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.264 issued rwts: total=1955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.264 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:36.264 filename0: (groupid=0, jobs=1): err= 0: pid=336066: Thu May 16 20:25:21 2024 00:26:36.264 read: IOPS=186, BW=23.3MiB/s (24.5MB/s)(234MiB/10047msec) 00:26:36.264 slat (nsec): min=7260, max=36699, avg=14320.23, stdev=3065.63 00:26:36.264 clat (usec): min=12910, max=53689, avg=16033.91, stdev=1511.49 00:26:36.264 lat (usec): min=12924, max=53703, avg=16048.23, stdev=1511.53 00:26:36.264 clat percentiles (usec): 00:26:36.264 | 1.00th=[13829], 5.00th=[14484], 10.00th=[14746], 20.00th=[15270], 00:26:36.264 | 30.00th=[15533], 40.00th=[15664], 50.00th=[15926], 60.00th=[16188], 00:26:36.264 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17171], 95.00th=[17695], 00:26:36.264 | 99.00th=[18482], 99.50th=[18744], 99.90th=[47449], 99.95th=[53740], 00:26:36.264 | 99.99th=[53740] 00:26:36.264 bw ( KiB/s): min=23296, max=24576, per=32.68%, avg=23974.40, stdev=291.00, samples=20 00:26:36.264 iops : min= 182, max= 192, avg=187.30, stdev= 2.27, samples=20 00:26:36.264 lat (msec) : 20=99.73%, 50=0.21%, 100=0.05% 00:26:36.264 cpu : usr=92.55%, sys=6.97%, ctx=18, majf=0, minf=136 00:26:36.264 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:36.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.264 issued rwts: total=1875,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.264 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:36.264 filename0: (groupid=0, jobs=1): err= 0: pid=336067: Thu May 16 20:25:21 2024 00:26:36.264 read: IOPS=192, BW=24.0MiB/s (25.2MB/s)(241MiB/10046msec) 00:26:36.264 slat (nsec): min=5120, max=36019, avg=13688.86, stdev=2788.49 00:26:36.264 clat (usec): min=11532, max=55339, avg=15584.06, stdev=1586.60 00:26:36.264 lat (usec): min=11545, max=55351, avg=15597.74, stdev=1586.64 00:26:36.264 clat percentiles (usec): 00:26:36.264 | 
1.00th=[13042], 5.00th=[13829], 10.00th=[14222], 20.00th=[14746], 00:26:36.264 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15533], 60.00th=[15795], 00:26:36.264 | 70.00th=[16057], 80.00th=[16319], 90.00th=[16909], 95.00th=[17433], 00:26:36.264 | 99.00th=[18220], 99.50th=[18744], 99.90th=[48497], 99.95th=[55313], 00:26:36.264 | 99.99th=[55313] 00:26:36.264 bw ( KiB/s): min=24064, max=25344, per=33.62%, avg=24668.00, stdev=322.09, samples=20 00:26:36.264 iops : min= 188, max= 198, avg=192.70, stdev= 2.54, samples=20 00:26:36.264 lat (msec) : 20=99.84%, 50=0.10%, 100=0.05% 00:26:36.264 cpu : usr=92.49%, sys=7.04%, ctx=20, majf=0, minf=142 00:26:36.264 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:36.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.264 issued rwts: total=1929,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.264 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:36.264 00:26:36.264 Run status group 0 (all jobs): 00:26:36.264 READ: bw=71.6MiB/s (75.1MB/s), 23.3MiB/s-24.3MiB/s (24.5MB/s-25.5MB/s), io=720MiB (755MB), run=10046-10048msec 00:26:36.264 20:25:22 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:36.264 20:25:22 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:36.264 20:25:22 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:36.264 20:25:22 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:36.264 20:25:22 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:36.264 20:25:22 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:36.264 20:25:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.264 20:25:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:36.264 20:25:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.264 20:25:22 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:36.264 20:25:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.264 20:25:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:36.264 20:25:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.264 00:26:36.264 real 0m11.299s 00:26:36.264 user 0m29.195s 00:26:36.264 sys 0m2.386s 00:26:36.264 20:25:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:36.264 20:25:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:36.264 ************************************ 00:26:36.264 END TEST fio_dif_digest 00:26:36.264 ************************************ 00:26:36.264 20:25:22 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:36.264 20:25:22 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:36.264 20:25:22 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:36.264 20:25:22 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:36.264 20:25:22 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:36.264 20:25:22 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:36.264 20:25:22 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:36.264 20:25:22 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:36.264 rmmod nvme_tcp 00:26:36.264 rmmod nvme_fabrics 00:26:36.264 
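As a quick consistency check on the run status group above: the 71.6 MiB/s aggregate READ bandwidth is simply the sum of the three per-job averages (24.3 + 23.3 + 24.0 MiB/s), and the 720 MiB of total I/O matches the per-job totals of 244 MiB, 234 MiB and 241 MiB over the roughly 10-second runtime.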
rmmod nvme_keyring 00:26:36.264 20:25:22 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:36.264 20:25:22 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:36.264 20:25:22 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:36.264 20:25:22 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 330013 ']' 00:26:36.264 20:25:22 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 330013 00:26:36.264 20:25:22 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 330013 ']' 00:26:36.264 20:25:22 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 330013 00:26:36.264 20:25:22 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:26:36.264 20:25:22 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:36.264 20:25:22 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 330013 00:26:36.264 20:25:22 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:36.264 20:25:22 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:36.264 20:25:22 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 330013' 00:26:36.264 killing process with pid 330013 00:26:36.264 20:25:22 nvmf_dif -- common/autotest_common.sh@965 -- # kill 330013 00:26:36.264 [2024-05-16 20:25:22.167551] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:36.264 20:25:22 nvmf_dif -- common/autotest_common.sh@970 -- # wait 330013 00:26:36.264 20:25:22 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:36.264 20:25:22 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:36.522 Waiting for block devices as requested 00:26:36.522 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:36.522 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:36.522 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:36.522 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:36.780 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:36.780 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:36.780 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:36.780 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:37.039 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:26:37.039 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:37.039 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:37.298 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:37.298 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:37.298 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:37.298 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:37.556 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:37.556 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:37.556 20:25:24 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:37.556 20:25:24 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:37.556 20:25:24 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:37.556 20:25:24 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:37.556 20:25:24 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.556 20:25:24 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:37.556 20:25:24 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.084 20:25:26 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:40.084 00:26:40.084 real 
1m6.713s 00:26:40.084 user 6m29.074s 00:26:40.084 sys 0m18.320s 00:26:40.084 20:25:26 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:40.084 20:25:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:40.084 ************************************ 00:26:40.084 END TEST nvmf_dif 00:26:40.084 ************************************ 00:26:40.084 20:25:26 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:40.084 20:25:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:40.084 20:25:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:40.084 20:25:26 -- common/autotest_common.sh@10 -- # set +x 00:26:40.084 ************************************ 00:26:40.084 START TEST nvmf_abort_qd_sizes 00:26:40.084 ************************************ 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:40.084 * Looking for test storage... 00:26:40.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.084 20:25:26 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
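Before any initiator-side commands run, nvmf/common.sh builds the host identity from nvme gen-hostnqn, and the host ID shown above (29f67375-a902-e411-ace9-001e67bc3c9a) is just the UUID suffix of that NQN. One way to derive the same pair by hand is sketched below; the parameter expansion and the discover target address (the SPDK listener from this run) are illustrative assumptions rather than a copy of the script.

  HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  HOSTID=${HOSTNQN##*:}          # bare UUID, equivalent to NVME_HOSTID above
  nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" --hostid="$HOSTID"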
00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:26:40.085 20:25:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.986 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- 
nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:41.987 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:41.987 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:41.987 Found net devices under 0000:09:00.0: cvl_0_0 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:41.987 Found net devices under 0000:09:00.1: cvl_0_1 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@404 
-- # (( 2 == 0 )) 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:41.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:41.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:26:41.987 00:26:41.987 --- 10.0.0.2 ping statistics --- 00:26:41.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.987 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:41.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
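The nvmf_tcp_init steps above split the two E810 ports between the default namespace (initiator, cvl_0_1 at 10.0.0.1) and a private namespace (target, cvl_0_0 at 10.0.0.2), so that traffic between them leaves the host through the NIC ports rather than staying on loopback. Condensed into a plain recipe, with the interface and namespace names taken from this log, the wiring is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept inbound TCP port 4420 on cvl_0_1
  ping -c 1 10.0.0.2   # initiator -> target reachability check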
00:26:41.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:26:41.987 00:26:41.987 --- 10.0.0.1 ping statistics --- 00:26:41.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.987 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:41.987 20:25:28 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:42.923 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:42.923 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:42.923 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:42.923 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:42.923 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:42.923 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:42.923 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:42.923 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:42.923 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:42.923 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:42.923 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:42.923 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:42.923 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:42.923 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:42.923 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:42.923 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:43.858 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:26:44.116 20:25:31 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.116 20:25:31 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:44.116 20:25:31 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:44.116 20:25:31 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.116 20:25:31 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:44.116 20:25:31 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:44.116 20:25:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:44.116 20:25:31 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:44.116 20:25:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:44.116 20:25:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:44.116 20:25:31 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=340865 00:26:44.116 20:25:31 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:44.116 20:25:31 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 340865 00:26:44.116 20:25:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 340865 ']' 00:26:44.116 20:25:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.116 20:25:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:44.116 20:25:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:44.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.116 20:25:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:44.116 20:25:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:44.117 [2024-05-16 20:25:31.086334] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:26:44.117 [2024-05-16 20:25:31.086407] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.117 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.117 [2024-05-16 20:25:31.152335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:44.375 [2024-05-16 20:25:31.271310] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.375 [2024-05-16 20:25:31.271363] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.375 [2024-05-16 20:25:31.271378] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.375 [2024-05-16 20:25:31.271391] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.375 [2024-05-16 20:25:31.271403] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:44.375 [2024-05-16 20:25:31.271485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.375 [2024-05-16 20:25:31.271555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:44.375 [2024-05-16 20:25:31.271648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:44.375 [2024-05-16 20:25:31.271650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:0b:00.0 ]] 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:0b:00.0 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:44.941 20:25:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:44.941 ************************************ 00:26:44.941 START TEST spdk_target_abort 00:26:44.941 ************************************ 00:26:44.941 20:25:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:26:44.941 20:25:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:44.941 20:25:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:26:44.941 20:25:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.941 20:25:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:48.221 spdk_targetn1 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:48.221 [2024-05-16 20:25:34.915652] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:48.221 [2024-05-16 20:25:34.947662] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:48.221 [2024-05-16 20:25:34.947958] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:48.221 20:25:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:48.221 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.499 Initializing NVMe Controllers 00:26:51.499 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:51.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:51.499 Initialization complete. Launching workers. 00:26:51.500 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12699, failed: 0 00:26:51.500 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1259, failed to submit 11440 00:26:51.500 success 727, unsuccess 532, failed 0 00:26:51.500 20:25:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:51.500 20:25:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:51.500 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.778 Initializing NVMe Controllers 00:26:54.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:54.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:54.778 Initialization complete. Launching workers. 00:26:54.778 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8652, failed: 0 00:26:54.778 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1226, failed to submit 7426 00:26:54.778 success 312, unsuccess 914, failed 0 00:26:54.778 20:25:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:54.778 20:25:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:54.778 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.059 Initializing NVMe Controllers 00:26:58.059 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:58.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:58.059 Initialization complete. Launching workers. 
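The spdk_target_abort case drives SPDK's abort example against the TCP listener created above once per configured queue depth (qds=(4 24 64)). Outside the test harness the loop boils down to the sketch below; the binary path is relative to an SPDK build tree, and the other arguments are copied from the invocations in this log (-q queue depth, -w rw with -M 50 an even read/write mix, -o 4 KiB I/O size, -r the transport ID of the target).

  for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done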
00:26:58.059 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31747, failed: 0 00:26:58.059 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2630, failed to submit 29117 00:26:58.059 success 543, unsuccess 2087, failed 0 00:26:58.059 20:25:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:26:58.059 20:25:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.059 20:25:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:58.059 20:25:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.059 20:25:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:58.059 20:25:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.059 20:25:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:58.992 20:25:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.992 20:25:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 340865 00:26:58.992 20:25:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 340865 ']' 00:26:58.992 20:25:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 340865 00:26:58.992 20:25:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:26:58.992 20:25:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:58.992 20:25:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 340865 00:26:58.992 20:25:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:58.992 20:25:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:58.992 20:25:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 340865' 00:26:58.992 killing process with pid 340865 00:26:58.992 20:25:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 340865 00:26:58.992 [2024-05-16 20:25:46.066278] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:58.992 20:25:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 340865 00:26:59.251 00:26:59.251 real 0m14.259s 00:26:59.251 user 0m56.569s 00:26:59.251 sys 0m2.523s 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:59.251 ************************************ 00:26:59.251 END TEST spdk_target_abort 00:26:59.251 ************************************ 00:26:59.251 20:25:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:26:59.251 20:25:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:59.251 20:25:46 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:26:59.251 20:25:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:59.251 ************************************ 00:26:59.251 START TEST kernel_target_abort 00:26:59.251 ************************************ 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:59.251 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:59.509 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:59.509 20:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:00.445 Waiting for block devices as requested 00:27:00.445 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:00.445 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:00.445 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:00.705 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:00.705 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:00.705 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:00.705 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:00.963 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:00.963 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:27:00.963 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:00.963 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:01.221 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:01.221 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:01.221 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:01.221 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:01.479 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:01.479 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:01.479 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:01.479 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:01.479 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:01.479 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:27:01.479 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:01.479 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:27:01.479 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:01.479 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:01.479 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:01.737 No valid GPT data, bailing 00:27:01.737 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:01.737 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:01.737 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:01.737 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:01.737 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:01.737 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:01.737 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:01.737 20:25:48 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:01.737 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:01.737 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:01.737 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:01.737 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:01.737 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:01.737 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:01.737 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:27:01.738 00:27:01.738 Discovery Log Number of Records 2, Generation counter 2 00:27:01.738 =====Discovery Log Entry 0====== 00:27:01.738 trtype: tcp 00:27:01.738 adrfam: ipv4 00:27:01.738 subtype: current discovery subsystem 00:27:01.738 treq: not specified, sq flow control disable supported 00:27:01.738 portid: 1 00:27:01.738 trsvcid: 4420 00:27:01.738 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:01.738 traddr: 10.0.0.1 00:27:01.738 eflags: none 00:27:01.738 sectype: none 00:27:01.738 =====Discovery Log Entry 1====== 00:27:01.738 trtype: tcp 00:27:01.738 adrfam: ipv4 00:27:01.738 subtype: nvme subsystem 00:27:01.738 treq: not specified, sq flow control disable supported 00:27:01.738 portid: 1 00:27:01.738 trsvcid: 4420 00:27:01.738 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:01.738 traddr: 10.0.0.1 00:27:01.738 eflags: none 00:27:01.738 sectype: none 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:01.738 20:25:48 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:01.738 20:25:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:01.738 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.011 Initializing NVMe Controllers 00:27:05.011 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:05.011 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:05.011 Initialization complete. Launching workers. 00:27:05.011 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 47094, failed: 0 00:27:05.011 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 47094, failed to submit 0 00:27:05.011 success 0, unsuccess 47094, failed 0 00:27:05.011 20:25:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:05.011 20:25:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:05.011 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.290 Initializing NVMe Controllers 00:27:08.290 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:08.290 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:08.290 Initialization complete. Launching workers. 
00:27:08.290 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 90249, failed: 0 00:27:08.290 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22754, failed to submit 67495 00:27:08.290 success 0, unsuccess 22754, failed 0 00:27:08.290 20:25:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:08.290 20:25:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:08.290 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.817 Initializing NVMe Controllers 00:27:10.817 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:10.817 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:10.817 Initialization complete. Launching workers. 00:27:10.817 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 83084, failed: 0 00:27:10.817 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20714, failed to submit 62370 00:27:10.817 success 0, unsuccess 20714, failed 0 00:27:10.817 20:25:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:10.817 20:25:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:10.817 20:25:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:10.817 20:25:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:10.817 20:25:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:10.817 20:25:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:10.817 20:25:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:10.817 20:25:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:10.817 20:25:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:10.817 20:25:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:12.192 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:12.192 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:12.192 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:12.192 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:12.192 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:12.192 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:12.192 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:12.192 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:12.192 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:12.192 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:12.192 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:12.192 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:12.192 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:12.192 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:27:12.192 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:12.192 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:13.127 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:27:13.127 00:27:13.127 real 0m13.744s 00:27:13.127 user 0m6.587s 00:27:13.127 sys 0m2.854s 00:27:13.127 20:26:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:13.127 20:26:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:13.127 ************************************ 00:27:13.127 END TEST kernel_target_abort 00:27:13.127 ************************************ 00:27:13.127 20:26:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:13.127 20:26:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:13.127 20:26:00 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:13.127 20:26:00 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:13.127 20:26:00 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:13.127 20:26:00 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:13.127 20:26:00 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:13.127 20:26:00 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:13.127 rmmod nvme_tcp 00:27:13.127 rmmod nvme_fabrics 00:27:13.127 rmmod nvme_keyring 00:27:13.127 20:26:00 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:13.127 20:26:00 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:13.127 20:26:00 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:13.127 20:26:00 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 340865 ']' 00:27:13.127 20:26:00 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 340865 00:27:13.127 20:26:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 340865 ']' 00:27:13.127 20:26:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 340865 00:27:13.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (340865) - No such process 00:27:13.127 20:26:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 340865 is not found' 00:27:13.127 Process with pid 340865 is not found 00:27:13.127 20:26:00 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:13.127 20:26:00 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:14.059 Waiting for block devices as requested 00:27:14.059 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:14.316 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:14.316 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:14.316 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:14.316 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:14.573 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:14.573 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:14.573 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:14.573 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:27:14.831 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:14.831 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:14.831 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:14.831 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:15.090 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:15.090 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:15.090 0000:80:04.1 (8086 
0e21): vfio-pci -> ioatdma 00:27:15.090 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:15.349 20:26:02 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:15.349 20:26:02 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:15.349 20:26:02 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:15.349 20:26:02 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:15.349 20:26:02 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.349 20:26:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:15.349 20:26:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.258 20:26:04 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:17.258 00:27:17.258 real 0m37.580s 00:27:17.258 user 1m5.314s 00:27:17.258 sys 0m8.451s 00:27:17.258 20:26:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:17.258 20:26:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:17.258 ************************************ 00:27:17.258 END TEST nvmf_abort_qd_sizes 00:27:17.258 ************************************ 00:27:17.258 20:26:04 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:17.258 20:26:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:17.258 20:26:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:17.258 20:26:04 -- common/autotest_common.sh@10 -- # set +x 00:27:17.258 ************************************ 00:27:17.258 START TEST keyring_file 00:27:17.258 ************************************ 00:27:17.258 20:26:04 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:17.515 * Looking for test storage... 
00:27:17.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:17.515 20:26:04 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:17.515 20:26:04 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:17.515 20:26:04 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.515 20:26:04 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.515 20:26:04 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.515 20:26:04 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.515 20:26:04 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.515 20:26:04 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.515 20:26:04 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:17.515 20:26:04 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:17.515 20:26:04 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:17.516 20:26:04 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:17.516 20:26:04 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:17.516 20:26:04 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:17.516 20:26:04 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:17.516 20:26:04 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:17.516 20:26:04 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:17.516 20:26:04 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:17.516 20:26:04 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:17.516 20:26:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:17.516 20:26:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:17.516 20:26:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:17.516 20:26:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:17.516 20:26:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:17.516 20:26:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.KsaYi6PjSF 00:27:17.516 20:26:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:17.516 20:26:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:17.516 20:26:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:17.516 20:26:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:17.516 20:26:04 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:17.516 20:26:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:17.516 20:26:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:17.516 20:26:04 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.KsaYi6PjSF 00:27:17.516 20:26:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.KsaYi6PjSF 00:27:17.516 20:26:04 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.KsaYi6PjSF 00:27:17.516 20:26:04 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:17.516 20:26:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:17.516 20:26:04 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:17.516 20:26:04 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:17.516 20:26:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:17.516 20:26:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:17.516 20:26:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sukxcV8znh 00:27:17.516 20:26:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:17.516 20:26:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:17.516 20:26:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:17.516 20:26:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:17.516 20:26:04 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:17.516 20:26:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:17.516 20:26:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:17.516 20:26:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sukxcV8znh 00:27:17.516 20:26:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sukxcV8znh 00:27:17.516 20:26:04 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.sukxcV8znh 00:27:17.516 20:26:04 keyring_file -- keyring/file.sh@30 -- # tgtpid=346535 00:27:17.516 20:26:04 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:17.516 20:26:04 keyring_file -- keyring/file.sh@32 -- # waitforlisten 346535 00:27:17.516 20:26:04 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 346535 ']' 00:27:17.516 20:26:04 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.516 20:26:04 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:17.516 20:26:04 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.516 20:26:04 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:17.516 20:26:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:17.516 [2024-05-16 20:26:04.600713] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
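prep_key in keyring/common.sh, traced above for key0 and key1, turns a raw hex secret into an NVMe/TCP TLS PSK interchange string and stores it in a temp file locked down to 0600. A rough sketch of that flow, with the interchange payload left as a placeholder because the python one-liner that computes it is not expanded in the trace:

    KEY0_HEX=00112233445566778899aabbccddeeff
    KEY0_PATH=$(mktemp)                                   # e.g. /tmp/tmp.KsaYi6PjSF in this run
    # format_interchange_psk/format_key emit "NVMeTLSkey-1:<digest>:<base64 payload>:";
    # the exact payload encoding lives in nvmf/common.sh, so only a placeholder is shown here.
    printf 'NVMeTLSkey-1:<digest>:<base64 payload>:\n' > "$KEY0_PATH"
    chmod 0600 "$KEY0_PATH"                               # the keyring module later insists on 0600

file.sh keeps the two resulting paths as key0path and key1path and registers them with the bdevperf keyring once that process is up.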
00:27:17.516 [2024-05-16 20:26:04.600818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346535 ] 00:27:17.516 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.775 [2024-05-16 20:26:04.664600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.775 [2024-05-16 20:26:04.784397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:27:18.033 20:26:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:18.033 [2024-05-16 20:26:05.041435] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.033 null0 00:27:18.033 [2024-05-16 20:26:05.073455] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:18.033 [2024-05-16 20:26:05.073516] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:18.033 [2024-05-16 20:26:05.073935] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:18.033 [2024-05-16 20:26:05.081505] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.033 20:26:05 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:18.033 [2024-05-16 20:26:05.093523] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:18.033 request: 00:27:18.033 { 00:27:18.033 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:18.033 "secure_channel": false, 00:27:18.033 "listen_address": { 00:27:18.033 "trtype": "tcp", 00:27:18.033 "traddr": "127.0.0.1", 00:27:18.033 "trsvcid": "4420" 00:27:18.033 }, 00:27:18.033 "method": "nvmf_subsystem_add_listener", 00:27:18.033 "req_id": 1 00:27:18.033 } 00:27:18.033 Got JSON-RPC error response 00:27:18.033 response: 00:27:18.033 { 00:27:18.033 "code": -32602, 00:27:18.033 "message": 
"Invalid parameters" 00:27:18.033 } 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:18.033 20:26:05 keyring_file -- keyring/file.sh@46 -- # bperfpid=346630 00:27:18.033 20:26:05 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:18.033 20:26:05 keyring_file -- keyring/file.sh@48 -- # waitforlisten 346630 /var/tmp/bperf.sock 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 346630 ']' 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:18.033 20:26:05 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:18.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:18.034 20:26:05 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:18.034 20:26:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:18.034 [2024-05-16 20:26:05.140269] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:27:18.034 [2024-05-16 20:26:05.140329] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346630 ] 00:27:18.034 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.291 [2024-05-16 20:26:05.199986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.292 [2024-05-16 20:26:05.316876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.224 20:26:06 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:19.224 20:26:06 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:27:19.224 20:26:06 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KsaYi6PjSF 00:27:19.224 20:26:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KsaYi6PjSF 00:27:19.224 20:26:06 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.sukxcV8znh 00:27:19.224 20:26:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.sukxcV8znh 00:27:19.481 20:26:06 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:19.481 20:26:06 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:19.481 20:26:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:19.481 20:26:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:19.481 20:26:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:19.739 
20:26:06 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.KsaYi6PjSF == \/\t\m\p\/\t\m\p\.\K\s\a\Y\i\6\P\j\S\F ]] 00:27:19.739 20:26:06 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:27:19.739 20:26:06 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:19.739 20:26:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:19.739 20:26:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:19.739 20:26:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:19.997 20:26:07 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.sukxcV8znh == \/\t\m\p\/\t\m\p\.\s\u\k\x\c\V\8\z\n\h ]] 00:27:19.997 20:26:07 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:19.997 20:26:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:19.997 20:26:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:19.997 20:26:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:19.997 20:26:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:19.997 20:26:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:20.255 20:26:07 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:20.255 20:26:07 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:20.255 20:26:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:20.255 20:26:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:20.255 20:26:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:20.255 20:26:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:20.255 20:26:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:20.513 20:26:07 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:20.513 20:26:07 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:20.513 20:26:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:20.771 [2024-05-16 20:26:07.743924] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:20.771 nvme0n1 00:27:20.771 20:26:07 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:20.771 20:26:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:20.771 20:26:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:20.771 20:26:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:20.771 20:26:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:20.771 20:26:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:21.031 20:26:08 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:21.031 20:26:08 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:21.031 20:26:08 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:27:21.031 20:26:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:21.031 20:26:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:21.031 20:26:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:21.031 20:26:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:21.288 20:26:08 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:21.288 20:26:08 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:21.546 Running I/O for 1 seconds... 00:27:22.479 00:27:22.480 Latency(us) 00:27:22.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:22.480 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:22.480 nvme0n1 : 1.01 8573.24 33.49 0.00 0.00 14860.93 8495.41 26991.12 00:27:22.480 =================================================================================================================== 00:27:22.480 Total : 8573.24 33.49 0.00 0.00 14860.93 8495.41 26991.12 00:27:22.480 0 00:27:22.480 20:26:09 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:22.480 20:26:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:22.737 20:26:09 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:22.737 20:26:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:22.737 20:26:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:22.737 20:26:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:22.737 20:26:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:22.737 20:26:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:22.995 20:26:09 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:22.995 20:26:09 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:22.995 20:26:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:22.995 20:26:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:22.995 20:26:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:22.995 20:26:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:22.996 20:26:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:23.253 20:26:10 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:23.254 20:26:10 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:23.254 20:26:10 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:23.254 20:26:10 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:23.254 20:26:10 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 
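The positive-path check traced above (file.sh@57 through @64) attaches a TCP controller through the bdevperf RPC socket with --psk key0, drives one second of 4k randrw via bdevperf.py, and detaches; the refcount of key0 rises to 2 while the controller holds it. The negative variant that follows swaps in key1, whose PSK does not match the listener, and expects the attach to fail. A sketch of the attach-and-run pair, with the exact commands and paths from this run:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
    $RPC bdev_nvme_detach_controller nvme0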
00:27:23.254 20:26:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:23.254 20:26:10 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:23.254 20:26:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:23.254 20:26:10 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:23.254 20:26:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:23.512 [2024-05-16 20:26:10.443957] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:23.512 [2024-05-16 20:26:10.444556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c979b0 (107): Transport endpoint is not connected 00:27:23.512 [2024-05-16 20:26:10.445547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c979b0 (9): Bad file descriptor 00:27:23.512 [2024-05-16 20:26:10.446546] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:23.512 [2024-05-16 20:26:10.446566] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:23.512 [2024-05-16 20:26:10.446579] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:27:23.512 request: 00:27:23.512 { 00:27:23.512 "name": "nvme0", 00:27:23.512 "trtype": "tcp", 00:27:23.512 "traddr": "127.0.0.1", 00:27:23.512 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:23.512 "adrfam": "ipv4", 00:27:23.512 "trsvcid": "4420", 00:27:23.512 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:23.512 "psk": "key1", 00:27:23.512 "method": "bdev_nvme_attach_controller", 00:27:23.512 "req_id": 1 00:27:23.512 } 00:27:23.512 Got JSON-RPC error response 00:27:23.512 response: 00:27:23.512 { 00:27:23.512 "code": -32602, 00:27:23.512 "message": "Invalid parameters" 00:27:23.512 } 00:27:23.512 20:26:10 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:23.512 20:26:10 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:23.512 20:26:10 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:23.512 20:26:10 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:23.512 20:26:10 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:23.512 20:26:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:23.512 20:26:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:23.512 20:26:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:23.512 20:26:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:23.512 20:26:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:23.771 20:26:10 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:23.771 20:26:10 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:23.771 20:26:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:23.771 20:26:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:23.771 20:26:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:23.771 20:26:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:23.771 20:26:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:24.029 20:26:10 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:24.029 20:26:10 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:24.029 20:26:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:24.287 20:26:11 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:24.287 20:26:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:24.287 20:26:11 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:24.287 20:26:11 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:24.287 20:26:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:24.545 20:26:11 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:24.545 20:26:11 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.KsaYi6PjSF 00:27:24.545 20:26:11 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.KsaYi6PjSF 00:27:24.545 20:26:11 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:24.545 20:26:11 
keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.KsaYi6PjSF 00:27:24.545 20:26:11 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:24.545 20:26:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:24.545 20:26:11 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:24.545 20:26:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:24.545 20:26:11 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KsaYi6PjSF 00:27:24.803 20:26:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KsaYi6PjSF 00:27:24.803 [2024-05-16 20:26:11.912538] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.KsaYi6PjSF': 0100660 00:27:24.803 [2024-05-16 20:26:11.912576] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:24.803 request: 00:27:24.803 { 00:27:24.803 "name": "key0", 00:27:24.803 "path": "/tmp/tmp.KsaYi6PjSF", 00:27:24.803 "method": "keyring_file_add_key", 00:27:24.803 "req_id": 1 00:27:24.803 } 00:27:24.803 Got JSON-RPC error response 00:27:24.803 response: 00:27:24.803 { 00:27:24.803 "code": -1, 00:27:24.803 "message": "Operation not permitted" 00:27:24.803 } 00:27:24.803 20:26:11 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:24.803 20:26:11 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:24.803 20:26:11 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:24.803 20:26:11 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:24.803 20:26:11 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.KsaYi6PjSF 00:27:24.803 20:26:11 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KsaYi6PjSF 00:27:24.803 20:26:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KsaYi6PjSF 00:27:25.061 20:26:12 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.KsaYi6PjSF 00:27:25.061 20:26:12 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:25.061 20:26:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:25.061 20:26:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:25.061 20:26:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:25.061 20:26:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:25.062 20:26:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:25.320 20:26:12 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:25.320 20:26:12 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:25.320 20:26:12 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:25.320 20:26:12 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:25.320 20:26:12 
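The trace just above is the permissions check: with the key file relaxed to mode 0660 the keyring module rejects it ("Invalid permissions for key file ... 0100660", JSON-RPC error -1), and only after it is tightened back to 0600 does keyring_file_add_key succeed. Reduced to its essentials, using the same socket and temp path as this run:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    chmod 0660 /tmp/tmp.KsaYi6PjSF
    $RPC keyring_file_add_key key0 /tmp/tmp.KsaYi6PjSF   # rejected: invalid permissions for key file
    chmod 0600 /tmp/tmp.KsaYi6PjSF
    $RPC keyring_file_add_key key0 /tmp/tmp.KsaYi6PjSF   # accepted once the mode is 0600 again

The next negative test (file.sh@86 onward) removes the file entirely and shows the attach failing with "Could not stat key file ... No such file or directory", as traced below.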
keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:25.320 20:26:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:25.320 20:26:12 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:25.320 20:26:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:25.320 20:26:12 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:25.320 20:26:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:25.578 [2024-05-16 20:26:12.658586] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.KsaYi6PjSF': No such file or directory 00:27:25.579 [2024-05-16 20:26:12.658624] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:25.579 [2024-05-16 20:26:12.658659] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:25.579 [2024-05-16 20:26:12.658670] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:25.579 [2024-05-16 20:26:12.658681] bdev_nvme.c:6263:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:25.579 request: 00:27:25.579 { 00:27:25.579 "name": "nvme0", 00:27:25.579 "trtype": "tcp", 00:27:25.579 "traddr": "127.0.0.1", 00:27:25.579 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:25.579 "adrfam": "ipv4", 00:27:25.579 "trsvcid": "4420", 00:27:25.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:25.579 "psk": "key0", 00:27:25.579 "method": "bdev_nvme_attach_controller", 00:27:25.579 "req_id": 1 00:27:25.579 } 00:27:25.579 Got JSON-RPC error response 00:27:25.579 response: 00:27:25.579 { 00:27:25.579 "code": -19, 00:27:25.579 "message": "No such device" 00:27:25.579 } 00:27:25.579 20:26:12 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:25.579 20:26:12 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:25.579 20:26:12 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:25.579 20:26:12 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:25.579 20:26:12 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:25.579 20:26:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:25.837 20:26:12 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:25.837 20:26:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:25.837 20:26:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:25.837 20:26:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:25.837 20:26:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:25.837 20:26:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:25.837 20:26:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ah3A56RQuJ 00:27:25.837 20:26:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:25.837 20:26:12 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:25.837 20:26:12 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:25.837 20:26:12 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:25.837 20:26:12 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:25.837 20:26:12 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:25.837 20:26:12 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:25.837 20:26:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ah3A56RQuJ 00:27:25.837 20:26:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ah3A56RQuJ 00:27:25.837 20:26:12 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.ah3A56RQuJ 00:27:25.837 20:26:12 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ah3A56RQuJ 00:27:25.837 20:26:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ah3A56RQuJ 00:27:26.095 20:26:13 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:26.095 20:26:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:26.353 nvme0n1 00:27:26.610 20:26:13 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:26.610 20:26:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:26.610 20:26:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:26.610 20:26:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:26.610 20:26:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:26.610 20:26:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:26.868 20:26:13 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:26.868 20:26:13 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:26.868 20:26:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:26.868 20:26:14 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:26.868 20:26:14 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:26.868 20:26:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:26.868 20:26:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:26.868 20:26:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:27.127 20:26:14 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:27.127 20:26:14 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:27.127 20:26:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:27.127 20:26:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:27.127 20:26:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:27.127 20:26:14 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:27.127 20:26:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:27.385 20:26:14 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:27.385 20:26:14 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:27.385 20:26:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:27.643 20:26:14 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:27.643 20:26:14 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:27.643 20:26:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:27.901 20:26:14 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:27.901 20:26:14 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ah3A56RQuJ 00:27:27.901 20:26:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ah3A56RQuJ 00:27:28.159 20:26:15 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.sukxcV8znh 00:27:28.159 20:26:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.sukxcV8znh 00:27:28.416 20:26:15 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:28.416 20:26:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:28.674 nvme0n1 00:27:28.674 20:26:15 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:28.674 20:26:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:29.243 20:26:16 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:29.243 "subsystems": [ 00:27:29.243 { 00:27:29.243 "subsystem": "keyring", 00:27:29.243 "config": [ 00:27:29.243 { 00:27:29.243 "method": "keyring_file_add_key", 00:27:29.243 "params": { 00:27:29.243 "name": "key0", 00:27:29.243 "path": "/tmp/tmp.ah3A56RQuJ" 00:27:29.243 } 00:27:29.243 }, 00:27:29.243 { 00:27:29.243 "method": "keyring_file_add_key", 00:27:29.243 "params": { 00:27:29.243 "name": "key1", 00:27:29.243 "path": "/tmp/tmp.sukxcV8znh" 00:27:29.243 } 00:27:29.243 } 00:27:29.243 ] 00:27:29.243 }, 00:27:29.243 { 00:27:29.243 "subsystem": "iobuf", 00:27:29.243 "config": [ 00:27:29.243 { 00:27:29.243 "method": "iobuf_set_options", 00:27:29.243 "params": { 00:27:29.243 "small_pool_count": 8192, 00:27:29.243 "large_pool_count": 1024, 00:27:29.243 "small_bufsize": 8192, 00:27:29.243 "large_bufsize": 135168 00:27:29.243 } 00:27:29.243 } 00:27:29.243 ] 00:27:29.243 }, 00:27:29.243 { 00:27:29.243 "subsystem": "sock", 00:27:29.243 "config": [ 00:27:29.243 { 00:27:29.243 "method": "sock_set_default_impl", 00:27:29.243 "params": { 00:27:29.243 
"impl_name": "posix" 00:27:29.243 } 00:27:29.243 }, 00:27:29.243 { 00:27:29.243 "method": "sock_impl_set_options", 00:27:29.243 "params": { 00:27:29.243 "impl_name": "ssl", 00:27:29.243 "recv_buf_size": 4096, 00:27:29.243 "send_buf_size": 4096, 00:27:29.243 "enable_recv_pipe": true, 00:27:29.243 "enable_quickack": false, 00:27:29.243 "enable_placement_id": 0, 00:27:29.243 "enable_zerocopy_send_server": true, 00:27:29.243 "enable_zerocopy_send_client": false, 00:27:29.243 "zerocopy_threshold": 0, 00:27:29.243 "tls_version": 0, 00:27:29.243 "enable_ktls": false 00:27:29.243 } 00:27:29.243 }, 00:27:29.243 { 00:27:29.243 "method": "sock_impl_set_options", 00:27:29.243 "params": { 00:27:29.243 "impl_name": "posix", 00:27:29.243 "recv_buf_size": 2097152, 00:27:29.243 "send_buf_size": 2097152, 00:27:29.243 "enable_recv_pipe": true, 00:27:29.243 "enable_quickack": false, 00:27:29.243 "enable_placement_id": 0, 00:27:29.243 "enable_zerocopy_send_server": true, 00:27:29.243 "enable_zerocopy_send_client": false, 00:27:29.243 "zerocopy_threshold": 0, 00:27:29.244 "tls_version": 0, 00:27:29.244 "enable_ktls": false 00:27:29.244 } 00:27:29.244 } 00:27:29.244 ] 00:27:29.244 }, 00:27:29.244 { 00:27:29.244 "subsystem": "vmd", 00:27:29.244 "config": [] 00:27:29.244 }, 00:27:29.244 { 00:27:29.244 "subsystem": "accel", 00:27:29.244 "config": [ 00:27:29.244 { 00:27:29.244 "method": "accel_set_options", 00:27:29.244 "params": { 00:27:29.244 "small_cache_size": 128, 00:27:29.244 "large_cache_size": 16, 00:27:29.244 "task_count": 2048, 00:27:29.244 "sequence_count": 2048, 00:27:29.244 "buf_count": 2048 00:27:29.244 } 00:27:29.244 } 00:27:29.244 ] 00:27:29.244 }, 00:27:29.244 { 00:27:29.244 "subsystem": "bdev", 00:27:29.244 "config": [ 00:27:29.244 { 00:27:29.244 "method": "bdev_set_options", 00:27:29.244 "params": { 00:27:29.244 "bdev_io_pool_size": 65535, 00:27:29.244 "bdev_io_cache_size": 256, 00:27:29.244 "bdev_auto_examine": true, 00:27:29.244 "iobuf_small_cache_size": 128, 00:27:29.244 "iobuf_large_cache_size": 16 00:27:29.244 } 00:27:29.244 }, 00:27:29.244 { 00:27:29.244 "method": "bdev_raid_set_options", 00:27:29.244 "params": { 00:27:29.244 "process_window_size_kb": 1024 00:27:29.244 } 00:27:29.244 }, 00:27:29.244 { 00:27:29.244 "method": "bdev_iscsi_set_options", 00:27:29.244 "params": { 00:27:29.244 "timeout_sec": 30 00:27:29.244 } 00:27:29.244 }, 00:27:29.244 { 00:27:29.244 "method": "bdev_nvme_set_options", 00:27:29.244 "params": { 00:27:29.244 "action_on_timeout": "none", 00:27:29.244 "timeout_us": 0, 00:27:29.244 "timeout_admin_us": 0, 00:27:29.244 "keep_alive_timeout_ms": 10000, 00:27:29.244 "arbitration_burst": 0, 00:27:29.244 "low_priority_weight": 0, 00:27:29.244 "medium_priority_weight": 0, 00:27:29.244 "high_priority_weight": 0, 00:27:29.244 "nvme_adminq_poll_period_us": 10000, 00:27:29.244 "nvme_ioq_poll_period_us": 0, 00:27:29.244 "io_queue_requests": 512, 00:27:29.244 "delay_cmd_submit": true, 00:27:29.244 "transport_retry_count": 4, 00:27:29.244 "bdev_retry_count": 3, 00:27:29.244 "transport_ack_timeout": 0, 00:27:29.244 "ctrlr_loss_timeout_sec": 0, 00:27:29.244 "reconnect_delay_sec": 0, 00:27:29.244 "fast_io_fail_timeout_sec": 0, 00:27:29.244 "disable_auto_failback": false, 00:27:29.244 "generate_uuids": false, 00:27:29.244 "transport_tos": 0, 00:27:29.244 "nvme_error_stat": false, 00:27:29.244 "rdma_srq_size": 0, 00:27:29.244 "io_path_stat": false, 00:27:29.244 "allow_accel_sequence": false, 00:27:29.244 "rdma_max_cq_size": 0, 00:27:29.244 "rdma_cm_event_timeout_ms": 0, 00:27:29.244 
"dhchap_digests": [ 00:27:29.244 "sha256", 00:27:29.244 "sha384", 00:27:29.244 "sha512" 00:27:29.244 ], 00:27:29.244 "dhchap_dhgroups": [ 00:27:29.244 "null", 00:27:29.244 "ffdhe2048", 00:27:29.244 "ffdhe3072", 00:27:29.244 "ffdhe4096", 00:27:29.244 "ffdhe6144", 00:27:29.244 "ffdhe8192" 00:27:29.244 ] 00:27:29.244 } 00:27:29.244 }, 00:27:29.244 { 00:27:29.244 "method": "bdev_nvme_attach_controller", 00:27:29.244 "params": { 00:27:29.244 "name": "nvme0", 00:27:29.244 "trtype": "TCP", 00:27:29.244 "adrfam": "IPv4", 00:27:29.244 "traddr": "127.0.0.1", 00:27:29.244 "trsvcid": "4420", 00:27:29.244 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:29.244 "prchk_reftag": false, 00:27:29.244 "prchk_guard": false, 00:27:29.244 "ctrlr_loss_timeout_sec": 0, 00:27:29.244 "reconnect_delay_sec": 0, 00:27:29.244 "fast_io_fail_timeout_sec": 0, 00:27:29.244 "psk": "key0", 00:27:29.244 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:29.244 "hdgst": false, 00:27:29.244 "ddgst": false 00:27:29.244 } 00:27:29.244 }, 00:27:29.244 { 00:27:29.244 "method": "bdev_nvme_set_hotplug", 00:27:29.244 "params": { 00:27:29.244 "period_us": 100000, 00:27:29.244 "enable": false 00:27:29.244 } 00:27:29.244 }, 00:27:29.244 { 00:27:29.244 "method": "bdev_wait_for_examine" 00:27:29.244 } 00:27:29.244 ] 00:27:29.244 }, 00:27:29.244 { 00:27:29.244 "subsystem": "nbd", 00:27:29.244 "config": [] 00:27:29.244 } 00:27:29.244 ] 00:27:29.244 }' 00:27:29.244 20:26:16 keyring_file -- keyring/file.sh@114 -- # killprocess 346630 00:27:29.244 20:26:16 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 346630 ']' 00:27:29.244 20:26:16 keyring_file -- common/autotest_common.sh@950 -- # kill -0 346630 00:27:29.244 20:26:16 keyring_file -- common/autotest_common.sh@951 -- # uname 00:27:29.244 20:26:16 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:29.244 20:26:16 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 346630 00:27:29.244 20:26:16 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:29.244 20:26:16 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:29.244 20:26:16 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 346630' 00:27:29.244 killing process with pid 346630 00:27:29.244 20:26:16 keyring_file -- common/autotest_common.sh@965 -- # kill 346630 00:27:29.244 Received shutdown signal, test time was about 1.000000 seconds 00:27:29.244 00:27:29.244 Latency(us) 00:27:29.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.244 =================================================================================================================== 00:27:29.244 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:29.244 20:26:16 keyring_file -- common/autotest_common.sh@970 -- # wait 346630 00:27:29.244 20:26:16 keyring_file -- keyring/file.sh@117 -- # bperfpid=348096 00:27:29.244 20:26:16 keyring_file -- keyring/file.sh@119 -- # waitforlisten 348096 /var/tmp/bperf.sock 00:27:29.244 20:26:16 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 348096 ']' 00:27:29.244 20:26:16 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:29.244 20:26:16 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:29.244 20:26:16 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 
00:27:29.244 20:26:16 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:29.244 20:26:16 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:29.244 "subsystems": [ 00:27:29.244 { 00:27:29.244 "subsystem": "keyring", 00:27:29.244 "config": [ 00:27:29.244 { 00:27:29.244 "method": "keyring_file_add_key", 00:27:29.244 "params": { 00:27:29.244 "name": "key0", 00:27:29.244 "path": "/tmp/tmp.ah3A56RQuJ" 00:27:29.244 } 00:27:29.244 }, 00:27:29.244 { 00:27:29.244 "method": "keyring_file_add_key", 00:27:29.244 "params": { 00:27:29.244 "name": "key1", 00:27:29.244 "path": "/tmp/tmp.sukxcV8znh" 00:27:29.244 } 00:27:29.244 } 00:27:29.244 ] 00:27:29.244 }, 00:27:29.244 { 00:27:29.244 "subsystem": "iobuf", 00:27:29.244 "config": [ 00:27:29.244 { 00:27:29.244 "method": "iobuf_set_options", 00:27:29.244 "params": { 00:27:29.244 "small_pool_count": 8192, 00:27:29.244 "large_pool_count": 1024, 00:27:29.244 "small_bufsize": 8192, 00:27:29.244 "large_bufsize": 135168 00:27:29.244 } 00:27:29.244 } 00:27:29.244 ] 00:27:29.244 }, 00:27:29.244 { 00:27:29.244 "subsystem": "sock", 00:27:29.244 "config": [ 00:27:29.244 { 00:27:29.244 "method": "sock_set_default_impl", 00:27:29.244 "params": { 00:27:29.244 "impl_name": "posix" 00:27:29.244 } 00:27:29.244 }, 00:27:29.244 { 00:27:29.244 "method": "sock_impl_set_options", 00:27:29.244 "params": { 00:27:29.244 "impl_name": "ssl", 00:27:29.244 "recv_buf_size": 4096, 00:27:29.244 "send_buf_size": 4096, 00:27:29.244 "enable_recv_pipe": true, 00:27:29.244 "enable_quickack": false, 00:27:29.244 "enable_placement_id": 0, 00:27:29.244 "enable_zerocopy_send_server": true, 00:27:29.244 "enable_zerocopy_send_client": false, 00:27:29.244 "zerocopy_threshold": 0, 00:27:29.244 "tls_version": 0, 00:27:29.244 "enable_ktls": false 00:27:29.244 } 00:27:29.244 }, 00:27:29.244 { 00:27:29.244 "method": "sock_impl_set_options", 00:27:29.244 "params": { 00:27:29.244 "impl_name": "posix", 00:27:29.244 "recv_buf_size": 2097152, 00:27:29.244 "send_buf_size": 2097152, 00:27:29.244 "enable_recv_pipe": true, 00:27:29.244 "enable_quickack": false, 00:27:29.244 "enable_placement_id": 0, 00:27:29.244 "enable_zerocopy_send_server": true, 00:27:29.244 "enable_zerocopy_send_client": false, 00:27:29.244 "zerocopy_threshold": 0, 00:27:29.244 "tls_version": 0, 00:27:29.244 "enable_ktls": false 00:27:29.244 } 00:27:29.245 } 00:27:29.245 ] 00:27:29.245 }, 00:27:29.245 { 00:27:29.245 "subsystem": "vmd", 00:27:29.245 "config": [] 00:27:29.245 }, 00:27:29.245 { 00:27:29.245 "subsystem": "accel", 00:27:29.245 "config": [ 00:27:29.245 { 00:27:29.245 "method": "accel_set_options", 00:27:29.245 "params": { 00:27:29.245 "small_cache_size": 128, 00:27:29.245 "large_cache_size": 16, 00:27:29.245 "task_count": 2048, 00:27:29.245 "sequence_count": 2048, 00:27:29.245 "buf_count": 2048 00:27:29.245 } 00:27:29.245 } 00:27:29.245 ] 00:27:29.245 }, 00:27:29.245 { 00:27:29.245 "subsystem": "bdev", 00:27:29.245 "config": [ 00:27:29.245 { 00:27:29.245 "method": "bdev_set_options", 00:27:29.245 "params": { 00:27:29.245 "bdev_io_pool_size": 65535, 00:27:29.245 "bdev_io_cache_size": 256, 00:27:29.245 "bdev_auto_examine": true, 00:27:29.245 "iobuf_small_cache_size": 128, 00:27:29.245 "iobuf_large_cache_size": 16 00:27:29.245 } 00:27:29.245 }, 00:27:29.245 { 00:27:29.245 "method": "bdev_raid_set_options", 00:27:29.245 "params": { 00:27:29.245 "process_window_size_kb": 1024 00:27:29.245 } 00:27:29.245 }, 00:27:29.245 { 00:27:29.245 
"method": "bdev_iscsi_set_options", 00:27:29.245 "params": { 00:27:29.245 "timeout_sec": 30 00:27:29.245 } 00:27:29.245 }, 00:27:29.245 { 00:27:29.245 "method": "bdev_nvme_set_options", 00:27:29.245 "params": { 00:27:29.245 "action_on_timeout": "none", 00:27:29.245 "timeout_us": 0, 00:27:29.245 "timeout_admin_us": 0, 00:27:29.245 "keep_alive_timeout_ms": 10000, 00:27:29.245 "arbitration_burst": 0, 00:27:29.245 "low_priority_weight": 0, 00:27:29.245 "medium_priority_weight": 0, 00:27:29.245 "high_priority_weight": 0, 00:27:29.245 "nvme_adminq_poll_period_us": 10000, 00:27:29.245 "nvme_ioq_poll_period_us": 0, 00:27:29.245 "io_queue_requests": 512, 00:27:29.245 "delay_cmd_submit": true, 00:27:29.245 "transport_retry_count": 4, 00:27:29.245 "bdev_retry_count": 3, 00:27:29.245 "transport_ack_timeout": 0, 00:27:29.245 "ctrlr_loss_timeout_sec": 0, 00:27:29.245 "reconnect_delay_sec": 0, 00:27:29.245 "fast_io_fail_timeout_sec": 0, 00:27:29.245 "disable_auto_failback": false, 00:27:29.245 "generate_uuids": false, 00:27:29.245 "transport_tos": 0, 00:27:29.245 "nvme_error_stat": false, 00:27:29.245 "rdma_srq_size": 0, 00:27:29.245 "io_path_stat": false, 00:27:29.245 "allow_accel_sequence": false, 00:27:29.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:29.245 "rdma_max_cq_size": 0, 00:27:29.245 "rdma_cm_event_timeout_ms": 0, 00:27:29.245 "dhchap_digests": [ 00:27:29.245 "sha256", 00:27:29.245 "sha384", 00:27:29.245 "sha512" 00:27:29.245 ], 00:27:29.245 "dhchap_dhgroups": [ 00:27:29.245 "null", 00:27:29.245 "ffdhe2048", 00:27:29.245 "ffdhe3072", 00:27:29.245 "ffdhe4096", 00:27:29.245 "ffdhe6144", 00:27:29.245 "ffdhe8192" 00:27:29.245 ] 00:27:29.245 } 00:27:29.245 }, 00:27:29.245 { 00:27:29.245 "method": "bdev_nvme_attach_controller", 00:27:29.245 "params": { 00:27:29.245 "name": "nvme0", 00:27:29.245 "trtype": "TCP", 00:27:29.245 "adrfam": "IPv4", 00:27:29.245 "traddr": "127.0.0.1", 00:27:29.245 "trsvcid": "4420", 00:27:29.245 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:29.245 "prchk_reftag": false, 00:27:29.245 "prchk_guard": false, 00:27:29.245 "ctrlr_loss_timeout_sec": 0, 00:27:29.245 "reconnect_delay_sec": 0, 00:27:29.245 "fast_io_fail_timeout_sec": 0, 00:27:29.245 "psk": "key0", 00:27:29.245 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:29.245 "hdgst": false, 00:27:29.245 "ddgst": false 00:27:29.245 } 00:27:29.245 }, 00:27:29.245 { 00:27:29.245 "method": "bdev_nvme_set_hotplug", 00:27:29.245 "params": { 00:27:29.245 "period_us": 100000, 00:27:29.245 "enable": false 00:27:29.245 } 00:27:29.245 }, 00:27:29.245 { 00:27:29.245 "method": "bdev_wait_for_examine" 00:27:29.245 } 00:27:29.245 ] 00:27:29.245 }, 00:27:29.245 { 00:27:29.245 "subsystem": "nbd", 00:27:29.245 "config": [] 00:27:29.245 } 00:27:29.245 ] 00:27:29.245 }' 00:27:29.245 20:26:16 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:29.245 20:26:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:29.503 [2024-05-16 20:26:16.414358] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:27:29.503 [2024-05-16 20:26:16.414424] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid348096 ] 00:27:29.503 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.503 [2024-05-16 20:26:16.474065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.503 [2024-05-16 20:26:16.591199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.759 [2024-05-16 20:26:16.772137] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:30.325 20:26:17 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:30.325 20:26:17 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:27:30.325 20:26:17 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:30.325 20:26:17 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:30.325 20:26:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:30.583 20:26:17 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:30.583 20:26:17 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:30.583 20:26:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:30.583 20:26:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:30.583 20:26:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:30.583 20:26:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:30.583 20:26:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:30.840 20:26:17 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:30.840 20:26:17 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:30.840 20:26:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:30.840 20:26:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:30.840 20:26:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:30.840 20:26:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:30.840 20:26:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:31.098 20:26:18 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:31.098 20:26:18 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:31.098 20:26:18 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:31.098 20:26:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:31.356 20:26:18 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:31.356 20:26:18 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:31.356 20:26:18 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.ah3A56RQuJ /tmp/tmp.sukxcV8znh 00:27:31.356 20:26:18 keyring_file -- keyring/file.sh@20 -- # killprocess 348096 00:27:31.356 20:26:18 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 348096 ']' 00:27:31.356 20:26:18 keyring_file -- common/autotest_common.sh@950 -- # kill -0 348096 00:27:31.356 20:26:18 keyring_file -- common/autotest_common.sh@951 -- # 
uname 00:27:31.356 20:26:18 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:31.356 20:26:18 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 348096 00:27:31.356 20:26:18 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:31.356 20:26:18 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:31.356 20:26:18 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 348096' 00:27:31.356 killing process with pid 348096 00:27:31.356 20:26:18 keyring_file -- common/autotest_common.sh@965 -- # kill 348096 00:27:31.356 Received shutdown signal, test time was about 1.000000 seconds 00:27:31.356 00:27:31.356 Latency(us) 00:27:31.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:31.356 =================================================================================================================== 00:27:31.356 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:31.356 20:26:18 keyring_file -- common/autotest_common.sh@970 -- # wait 348096 00:27:31.614 20:26:18 keyring_file -- keyring/file.sh@21 -- # killprocess 346535 00:27:31.614 20:26:18 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 346535 ']' 00:27:31.614 20:26:18 keyring_file -- common/autotest_common.sh@950 -- # kill -0 346535 00:27:31.614 20:26:18 keyring_file -- common/autotest_common.sh@951 -- # uname 00:27:31.614 20:26:18 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:31.614 20:26:18 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 346535 00:27:31.615 20:26:18 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:31.615 20:26:18 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:31.615 20:26:18 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 346535' 00:27:31.615 killing process with pid 346535 00:27:31.615 20:26:18 keyring_file -- common/autotest_common.sh@965 -- # kill 346535 00:27:31.615 [2024-05-16 20:26:18.653251] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:31.615 [2024-05-16 20:26:18.653300] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:31.615 20:26:18 keyring_file -- common/autotest_common.sh@970 -- # wait 346535 00:27:32.182 00:27:32.182 real 0m14.689s 00:27:32.182 user 0m36.666s 00:27:32.182 sys 0m3.183s 00:27:32.182 20:26:19 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:32.182 20:26:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:32.182 ************************************ 00:27:32.182 END TEST keyring_file 00:27:32.182 ************************************ 00:27:32.182 20:26:19 -- spdk/autotest.sh@296 -- # [[ n == y ]] 00:27:32.182 20:26:19 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:27:32.182 20:26:19 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:32.182 20:26:19 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:32.182 20:26:19 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:27:32.182 20:26:19 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:27:32.182 20:26:19 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:27:32.182 20:26:19 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:27:32.182 20:26:19 -- 
spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:32.182 20:26:19 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:27:32.182 20:26:19 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:27:32.182 20:26:19 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:32.182 20:26:19 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:27:32.182 20:26:19 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:32.182 20:26:19 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:32.182 20:26:19 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:32.182 20:26:19 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:27:32.182 20:26:19 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:27:32.182 20:26:19 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:32.182 20:26:19 -- common/autotest_common.sh@10 -- # set +x 00:27:32.182 20:26:19 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:27:32.182 20:26:19 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:27:32.182 20:26:19 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:27:32.182 20:26:19 -- common/autotest_common.sh@10 -- # set +x 00:27:34.083 INFO: APP EXITING 00:27:34.083 INFO: killing all VMs 00:27:34.083 INFO: killing vhost app 00:27:34.083 INFO: EXIT DONE 00:27:35.019 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:27:35.019 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:27:35.019 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:27:35.019 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:27:35.019 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:27:35.019 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:27:35.019 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:27:35.019 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:27:35.019 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:27:35.019 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:27:35.019 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:27:35.019 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:27:35.019 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:27:35.019 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:27:35.277 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:27:35.277 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:27:35.277 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:27:36.211 Cleaning 00:27:36.211 Removing: /var/run/dpdk/spdk0/config 00:27:36.211 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:36.211 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:36.211 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:36.211 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:36.211 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:36.211 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:36.211 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:36.211 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:36.211 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:36.469 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:36.469 Removing: /var/run/dpdk/spdk1/config 00:27:36.469 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:36.469 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:36.469 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:36.469 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:36.469 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:36.469 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:36.469 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:36.469 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:36.469 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:36.469 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:36.469 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:36.469 Removing: /var/run/dpdk/spdk2/config 00:27:36.469 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:36.469 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:36.469 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:36.469 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:36.469 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:36.469 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:27:36.469 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:36.469 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:36.469 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:36.469 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:36.469 Removing: /var/run/dpdk/spdk3/config 00:27:36.470 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:36.470 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:36.470 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:36.470 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:36.470 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:36.470 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:36.470 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:36.470 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:36.470 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:36.470 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:36.470 Removing: /var/run/dpdk/spdk4/config 00:27:36.470 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:36.470 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:36.470 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:36.470 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:36.470 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:36.470 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:36.470 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:36.470 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:36.470 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:36.470 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:36.470 Removing: /dev/shm/bdev_svc_trace.1 00:27:36.470 Removing: /dev/shm/nvmf_trace.0 00:27:36.470 Removing: /dev/shm/spdk_tgt_trace.pid90132 00:27:36.470 Removing: /var/run/dpdk/spdk0 00:27:36.470 Removing: /var/run/dpdk/spdk1 00:27:36.470 Removing: /var/run/dpdk/spdk2 00:27:36.470 Removing: /var/run/dpdk/spdk3 00:27:36.470 Removing: /var/run/dpdk/spdk4 00:27:36.470 Removing: /var/run/dpdk/spdk_pid100300 00:27:36.470 Removing: /var/run/dpdk/spdk_pid100306 00:27:36.470 Removing: /var/run/dpdk/spdk_pid100732 00:27:36.470 Removing: /var/run/dpdk/spdk_pid100870 00:27:36.470 Removing: /var/run/dpdk/spdk_pid101044 00:27:36.470 Removing: /var/run/dpdk/spdk_pid101168 00:27:36.470 Removing: /var/run/dpdk/spdk_pid101341 00:27:36.470 Removing: /var/run/dpdk/spdk_pid101351 00:27:36.470 Removing: /var/run/dpdk/spdk_pid101835 00:27:36.470 Removing: /var/run/dpdk/spdk_pid101991 00:27:36.470 Removing: /var/run/dpdk/spdk_pid102185 00:27:36.470 Removing: /var/run/dpdk/spdk_pid102377 00:27:36.470 
Removing: /var/run/dpdk/spdk_pid102507 00:27:36.470 Removing: /var/run/dpdk/spdk_pid102620 00:27:36.470 Removing: /var/run/dpdk/spdk_pid102855 00:27:36.470 Removing: /var/run/dpdk/spdk_pid103007 00:27:36.470 Removing: /var/run/dpdk/spdk_pid103258 00:27:36.470 Removing: /var/run/dpdk/spdk_pid103443 00:27:36.470 Removing: /var/run/dpdk/spdk_pid103601 00:27:36.470 Removing: /var/run/dpdk/spdk_pid103874 00:27:36.470 Removing: /var/run/dpdk/spdk_pid104034 00:27:36.470 Removing: /var/run/dpdk/spdk_pid104189 00:27:36.470 Removing: /var/run/dpdk/spdk_pid104460 00:27:36.470 Removing: /var/run/dpdk/spdk_pid104620 00:27:36.470 Removing: /var/run/dpdk/spdk_pid104776 00:27:36.470 Removing: /var/run/dpdk/spdk_pid105057 00:27:36.470 Removing: /var/run/dpdk/spdk_pid105209 00:27:36.470 Removing: /var/run/dpdk/spdk_pid105375 00:27:36.470 Removing: /var/run/dpdk/spdk_pid105653 00:27:36.470 Removing: /var/run/dpdk/spdk_pid105818 00:27:36.470 Removing: /var/run/dpdk/spdk_pid105990 00:27:36.470 Removing: /var/run/dpdk/spdk_pid106274 00:27:36.470 Removing: /var/run/dpdk/spdk_pid106438 00:27:36.470 Removing: /var/run/dpdk/spdk_pid106712 00:27:36.470 Removing: /var/run/dpdk/spdk_pid106785 00:27:36.470 Removing: /var/run/dpdk/spdk_pid106989 00:27:36.470 Removing: /var/run/dpdk/spdk_pid109098 00:27:36.470 Removing: /var/run/dpdk/spdk_pid135411 00:27:36.470 Removing: /var/run/dpdk/spdk_pid138033 00:27:36.470 Removing: /var/run/dpdk/spdk_pid144995 00:27:36.470 Removing: /var/run/dpdk/spdk_pid148168 00:27:36.470 Removing: /var/run/dpdk/spdk_pid150518 00:27:36.470 Removing: /var/run/dpdk/spdk_pid151041 00:27:36.470 Removing: /var/run/dpdk/spdk_pid158298 00:27:36.470 Removing: /var/run/dpdk/spdk_pid158300 00:27:36.470 Removing: /var/run/dpdk/spdk_pid158957 00:27:36.470 Removing: /var/run/dpdk/spdk_pid159502 00:27:36.470 Removing: /var/run/dpdk/spdk_pid160158 00:27:36.470 Removing: /var/run/dpdk/spdk_pid160558 00:27:36.470 Removing: /var/run/dpdk/spdk_pid160588 00:27:36.728 Removing: /var/run/dpdk/spdk_pid160822 00:27:36.728 Removing: /var/run/dpdk/spdk_pid160961 00:27:36.728 Removing: /var/run/dpdk/spdk_pid160963 00:27:36.728 Removing: /var/run/dpdk/spdk_pid161626 00:27:36.728 Removing: /var/run/dpdk/spdk_pid162284 00:27:36.728 Removing: /var/run/dpdk/spdk_pid163162 00:27:36.728 Removing: /var/run/dpdk/spdk_pid163962 00:27:36.728 Removing: /var/run/dpdk/spdk_pid163970 00:27:36.728 Removing: /var/run/dpdk/spdk_pid164115 00:27:36.728 Removing: /var/run/dpdk/spdk_pid165087 00:27:36.728 Removing: /var/run/dpdk/spdk_pid165843 00:27:36.728 Removing: /var/run/dpdk/spdk_pid171203 00:27:36.728 Removing: /var/run/dpdk/spdk_pid171366 00:27:36.728 Removing: /var/run/dpdk/spdk_pid173994 00:27:36.728 Removing: /var/run/dpdk/spdk_pid177696 00:27:36.728 Removing: /var/run/dpdk/spdk_pid179869 00:27:36.728 Removing: /var/run/dpdk/spdk_pid186133 00:27:36.728 Removing: /var/run/dpdk/spdk_pid191325 00:27:36.728 Removing: /var/run/dpdk/spdk_pid192628 00:27:36.728 Removing: /var/run/dpdk/spdk_pid193310 00:27:36.728 Removing: /var/run/dpdk/spdk_pid204105 00:27:36.728 Removing: /var/run/dpdk/spdk_pid206299 00:27:36.728 Removing: /var/run/dpdk/spdk_pid231481 00:27:36.728 Removing: /var/run/dpdk/spdk_pid234256 00:27:36.728 Removing: /var/run/dpdk/spdk_pid235446 00:27:36.728 Removing: /var/run/dpdk/spdk_pid236645 00:27:36.728 Removing: /var/run/dpdk/spdk_pid236788 00:27:36.728 Removing: /var/run/dpdk/spdk_pid236920 00:27:36.728 Removing: /var/run/dpdk/spdk_pid237059 00:27:36.728 Removing: /var/run/dpdk/spdk_pid237377 00:27:36.728 Removing: 
/var/run/dpdk/spdk_pid238692 00:27:36.728 Removing: /var/run/dpdk/spdk_pid239430 00:27:36.728 Removing: /var/run/dpdk/spdk_pid239854 00:27:36.728 Removing: /var/run/dpdk/spdk_pid241472 00:27:36.728 Removing: /var/run/dpdk/spdk_pid241898 00:27:36.728 Removing: /var/run/dpdk/spdk_pid242455 00:27:36.728 Removing: /var/run/dpdk/spdk_pid244936 00:27:36.728 Removing: /var/run/dpdk/spdk_pid250887 00:27:36.728 Removing: /var/run/dpdk/spdk_pid254063 00:27:36.728 Removing: /var/run/dpdk/spdk_pid257784 00:27:36.728 Removing: /var/run/dpdk/spdk_pid258730 00:27:36.728 Removing: /var/run/dpdk/spdk_pid259816 00:27:36.728 Removing: /var/run/dpdk/spdk_pid262365 00:27:36.728 Removing: /var/run/dpdk/spdk_pid264676 00:27:36.728 Removing: /var/run/dpdk/spdk_pid269051 00:27:36.728 Removing: /var/run/dpdk/spdk_pid269073 00:27:36.728 Removing: /var/run/dpdk/spdk_pid271840 00:27:36.728 Removing: /var/run/dpdk/spdk_pid271985 00:27:36.728 Removing: /var/run/dpdk/spdk_pid272131 00:27:36.728 Removing: /var/run/dpdk/spdk_pid272496 00:27:36.728 Removing: /var/run/dpdk/spdk_pid272501 00:27:36.728 Removing: /var/run/dpdk/spdk_pid274992 00:27:36.728 Removing: /var/run/dpdk/spdk_pid275331 00:27:36.728 Removing: /var/run/dpdk/spdk_pid277870 00:27:36.728 Removing: /var/run/dpdk/spdk_pid279845 00:27:36.728 Removing: /var/run/dpdk/spdk_pid283252 00:27:36.728 Removing: /var/run/dpdk/spdk_pid286713 00:27:36.728 Removing: /var/run/dpdk/spdk_pid293573 00:27:36.728 Removing: /var/run/dpdk/spdk_pid297923 00:27:36.728 Removing: /var/run/dpdk/spdk_pid297931 00:27:36.728 Removing: /var/run/dpdk/spdk_pid310131 00:27:36.728 Removing: /var/run/dpdk/spdk_pid310553 00:27:36.728 Removing: /var/run/dpdk/spdk_pid311084 00:27:36.728 Removing: /var/run/dpdk/spdk_pid311497 00:27:36.728 Removing: /var/run/dpdk/spdk_pid312078 00:27:36.728 Removing: /var/run/dpdk/spdk_pid312538 00:27:36.728 Removing: /var/run/dpdk/spdk_pid313008 00:27:36.728 Removing: /var/run/dpdk/spdk_pid313414 00:27:36.728 Removing: /var/run/dpdk/spdk_pid315909 00:27:36.728 Removing: /var/run/dpdk/spdk_pid316054 00:27:36.728 Removing: /var/run/dpdk/spdk_pid319842 00:27:36.728 Removing: /var/run/dpdk/spdk_pid319972 00:27:36.728 Removing: /var/run/dpdk/spdk_pid321633 00:27:36.728 Removing: /var/run/dpdk/spdk_pid327163 00:27:36.728 Removing: /var/run/dpdk/spdk_pid327168 00:27:36.728 Removing: /var/run/dpdk/spdk_pid330063 00:27:36.728 Removing: /var/run/dpdk/spdk_pid331463 00:27:36.728 Removing: /var/run/dpdk/spdk_pid332870 00:27:36.728 Removing: /var/run/dpdk/spdk_pid333725 00:27:36.728 Removing: /var/run/dpdk/spdk_pid335045 00:27:36.728 Removing: /var/run/dpdk/spdk_pid335897 00:27:36.728 Removing: /var/run/dpdk/spdk_pid341292 00:27:36.728 Removing: /var/run/dpdk/spdk_pid341683 00:27:36.728 Removing: /var/run/dpdk/spdk_pid342073 00:27:36.728 Removing: /var/run/dpdk/spdk_pid343510 00:27:36.728 Removing: /var/run/dpdk/spdk_pid343902 00:27:36.728 Removing: /var/run/dpdk/spdk_pid344216 00:27:36.728 Removing: /var/run/dpdk/spdk_pid346535 00:27:36.728 Removing: /var/run/dpdk/spdk_pid346630 00:27:36.728 Removing: /var/run/dpdk/spdk_pid348096 00:27:36.728 Removing: /var/run/dpdk/spdk_pid88589 00:27:36.728 Removing: /var/run/dpdk/spdk_pid89319 00:27:36.728 Removing: /var/run/dpdk/spdk_pid90132 00:27:36.728 Removing: /var/run/dpdk/spdk_pid90560 00:27:36.728 Removing: /var/run/dpdk/spdk_pid91259 00:27:36.728 Removing: /var/run/dpdk/spdk_pid91394 00:27:36.728 Removing: /var/run/dpdk/spdk_pid92111 00:27:36.728 Removing: /var/run/dpdk/spdk_pid92127 00:27:36.728 Removing: 
/var/run/dpdk/spdk_pid92369 00:27:36.728 Removing: /var/run/dpdk/spdk_pid93600 00:27:36.728 Removing: /var/run/dpdk/spdk_pid94578 00:27:36.728 Removing: /var/run/dpdk/spdk_pid95353 00:27:36.728 Removing: /var/run/dpdk/spdk_pid95575 00:27:36.728 Removing: /var/run/dpdk/spdk_pid95785 00:27:36.728 Removing: /var/run/dpdk/spdk_pid95973 00:27:36.728 Removing: /var/run/dpdk/spdk_pid96144 00:27:36.728 Removing: /var/run/dpdk/spdk_pid96296 00:27:36.728 Removing: /var/run/dpdk/spdk_pid96484 00:27:36.728 Removing: /var/run/dpdk/spdk_pid97060 00:27:36.985 Removing: /var/run/dpdk/spdk_pid99412 00:27:36.985 Removing: /var/run/dpdk/spdk_pid99698 00:27:36.985 Removing: /var/run/dpdk/spdk_pid99868 00:27:36.985 Removing: /var/run/dpdk/spdk_pid99872 00:27:36.985 Clean 00:27:36.985 20:26:23 -- common/autotest_common.sh@1447 -- # return 0 00:27:36.985 20:26:23 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:27:36.985 20:26:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:36.985 20:26:23 -- common/autotest_common.sh@10 -- # set +x 00:27:36.985 20:26:23 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:27:36.985 20:26:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:36.985 20:26:23 -- common/autotest_common.sh@10 -- # set +x 00:27:36.985 20:26:23 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:36.985 20:26:23 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:27:36.985 20:26:23 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:27:36.985 20:26:23 -- spdk/autotest.sh@391 -- # hash lcov 00:27:36.985 20:26:23 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:36.985 20:26:23 -- spdk/autotest.sh@393 -- # hostname 00:27:36.985 20:26:23 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:27:37.242 geninfo: WARNING: invalid characters removed from testname! 
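For context, the coverage post-processing around this point (the capture above, then the merge and path filtering in the lines that follow) is a plain lcov pipeline. The condensed sketch below mirrors the invocations in this run, with $SPDK and $OUT standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest paths and the genhtml-related --rc flags omitted for brevity; it is a readability aid, not the exact autotest.sh code.

# Condensed sketch of the lcov steps traced in this log.
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

# Capture coverage from the test run, tagged with the build host name.
lcov $LCOV_OPTS -c -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"

# Merge with the baseline captured before the tests ran.
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# Strip paths that should not count toward SPDK coverage.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done

# Drop the intermediate files once cov_total.info is in place.
rm -f cov_base.info cov_test.info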
00:28:09.303 20:26:51 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:09.303 20:26:55 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:11.839 20:26:58 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:15.123 20:27:01 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:17.654 20:27:04 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:20.938 20:27:07 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:23.470 20:27:10 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:23.729 20:27:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:23.730 20:27:10 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:23.730 20:27:10 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:23.730 20:27:10 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:23.730 20:27:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.730 20:27:10 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.730 20:27:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.730 20:27:10 -- paths/export.sh@5 -- $ export PATH 00:28:23.730 20:27:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.730 20:27:10 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:28:23.730 20:27:10 -- common/autobuild_common.sh@437 -- $ date +%s 00:28:23.730 20:27:10 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715884030.XXXXXX 00:28:23.730 20:27:10 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715884030.dNY1wB 00:28:23.730 20:27:10 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:28:23.730 20:27:10 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:28:23.730 20:27:10 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:28:23.730 20:27:10 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:28:23.730 20:27:10 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:28:23.730 20:27:10 -- common/autobuild_common.sh@453 -- $ get_config_params 00:28:23.730 20:27:10 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:28:23.730 20:27:10 -- common/autotest_common.sh@10 -- $ set +x 00:28:23.730 20:27:10 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:28:23.730 20:27:10 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:28:23.730 20:27:10 -- pm/common@17 -- $ local monitor 00:28:23.730 20:27:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:23.730 20:27:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:23.730 20:27:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:23.730 20:27:10 -- pm/common@21 -- $ date +%s 00:28:23.730 20:27:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:23.730 20:27:10 -- pm/common@21 -- $ date +%s 00:28:23.730 
20:27:10 -- pm/common@25 -- $ sleep 1 00:28:23.730 20:27:10 -- pm/common@21 -- $ date +%s 00:28:23.730 20:27:10 -- pm/common@21 -- $ date +%s 00:28:23.730 20:27:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715884030 00:28:23.730 20:27:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715884030 00:28:23.730 20:27:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715884030 00:28:23.730 20:27:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715884030 00:28:23.730 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715884030_collect-vmstat.pm.log 00:28:23.730 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715884030_collect-cpu-load.pm.log 00:28:23.730 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715884030_collect-cpu-temp.pm.log 00:28:23.730 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715884030_collect-bmc-pm.bmc.pm.log 00:28:24.723 20:27:11 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:28:24.723 20:27:11 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:28:24.723 20:27:11 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:24.723 20:27:11 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:24.723 20:27:11 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:24.723 20:27:11 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:24.723 20:27:11 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:24.723 20:27:11 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:24.723 20:27:11 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:24.723 20:27:11 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:24.723 20:27:11 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:24.723 20:27:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:24.723 20:27:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:24.723 20:27:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:24.723 20:27:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:28:24.723 20:27:11 -- pm/common@44 -- $ pid=358126 00:28:24.723 20:27:11 -- pm/common@50 -- $ kill -TERM 358126 00:28:24.723 20:27:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:24.723 20:27:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:28:24.723 20:27:11 -- pm/common@44 -- $ pid=358128 00:28:24.723 20:27:11 -- pm/common@50 -- $ kill 
-TERM 358128 00:28:24.723 20:27:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:24.723 20:27:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:28:24.723 20:27:11 -- pm/common@44 -- $ pid=358130 00:28:24.723 20:27:11 -- pm/common@50 -- $ kill -TERM 358130 00:28:24.723 20:27:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:24.723 20:27:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:28:24.723 20:27:11 -- pm/common@44 -- $ pid=358165 00:28:24.723 20:27:11 -- pm/common@50 -- $ sudo -E kill -TERM 358165 00:28:24.723 + [[ -n 4641 ]] 00:28:24.723 + sudo kill 4641 00:28:24.757 [Pipeline] } 00:28:24.779 [Pipeline] // stage 00:28:24.784 [Pipeline] } 00:28:24.801 [Pipeline] // timeout 00:28:24.807 [Pipeline] } 00:28:24.825 [Pipeline] // catchError 00:28:24.833 [Pipeline] } 00:28:24.852 [Pipeline] // wrap 00:28:24.859 [Pipeline] } 00:28:24.874 [Pipeline] // catchError 00:28:24.883 [Pipeline] stage 00:28:24.885 [Pipeline] { (Epilogue) 00:28:24.899 [Pipeline] catchError 00:28:24.901 [Pipeline] { 00:28:24.918 [Pipeline] echo 00:28:24.920 Cleanup processes 00:28:24.926 [Pipeline] sh 00:28:25.238 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:25.239 358271 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:28:25.239 358400 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:25.261 [Pipeline] sh 00:28:25.541 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:25.541 ++ grep -v 'sudo pgrep' 00:28:25.541 ++ awk '{print $1}' 00:28:25.541 + sudo kill -9 358271 00:28:25.556 [Pipeline] sh 00:28:25.837 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:35.818 [Pipeline] sh 00:28:36.101 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:36.101 Artifacts sizes are good 00:28:36.116 [Pipeline] archiveArtifacts 00:28:36.124 Archiving artifacts 00:28:36.638 [Pipeline] sh 00:28:36.915 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:28:36.929 [Pipeline] cleanWs 00:28:36.937 [WS-CLEANUP] Deleting project workspace... 00:28:36.937 [WS-CLEANUP] Deferred wipeout is used... 00:28:36.943 [WS-CLEANUP] done 00:28:36.945 [Pipeline] } 00:28:36.965 [Pipeline] // catchError 00:28:36.979 [Pipeline] sh 00:28:37.256 + logger -p user.info -t JENKINS-CI 00:28:37.268 [Pipeline] } 00:28:37.284 [Pipeline] // stage 00:28:37.290 [Pipeline] } 00:28:37.307 [Pipeline] // node 00:28:37.313 [Pipeline] End of Pipeline 00:28:37.348 Finished: SUCCESS
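As a closing note on the resource monitors: their teardown in autopackage (pm/common@42-50 above) is a simple pid-file pattern. A minimal sketch follows, assuming the pid is read back from the .pid file each collect-* script wrote under the power/ output directory; the trace shows the existence check and the kill -TERM, not the read itself.

# Minimal sketch of the monitor teardown traced above; POWER_DIR is assumed to be
# the same .../output/power directory the collect-* scripts were started with.
POWER_DIR="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power"
for mon in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
    pidfile="$POWER_DIR/$mon.pid"
    [[ -e "$pidfile" ]] || continue
    if [[ "$mon" == collect-bmc-pm ]]; then
        sudo -E kill -TERM "$(cat "$pidfile")"   # the BMC monitor runs under sudo in the log
    else
        kill -TERM "$(cat "$pidfile")"
    fi
done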